Merge pull request #2004 from cnti-testcatalog/kubescape-version-check
[1993, 1999, 2003] Upgrade Kubescape to v3.0.8 and fix affected tests
agentpoyo authored Apr 25, 2024
2 parents 4f84477 + b8b655f commit 56ab664
Showing 21 changed files with 151 additions and 135 deletions.
10 changes: 5 additions & 5 deletions RATIONALE.md
@@ -276,8 +276,11 @@ In order to prevent illegitimate escalation by processes and restrict a processe
#### *To check if security services are being used to harden containers*: [linux_hardening](docs/LIST_OF_TESTS.md#linux-hardening)
> In order to reduce the attack surface, it is recommended, when possible, to harden your application using [security services](https://hub.armo.cloud/docs/c-0055) such as SELinux®, AppArmor®, and seccomp. Starting from Kubernetes version 1.22, SELinux is enabled by default.
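
For illustration, one common way to apply one of these services is a pod-level seccomp profile. The sketch below is a minimal, assumed example (pod name and image are placeholders), not the exact manifest this test inspects:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: seccomp-demo              # placeholder name
spec:
  securityContext:
    seccompProfile:
      type: RuntimeDefault        # use the container runtime's default seccomp profile
  containers:
    - name: app
      image: nginx:1.25           # placeholder image
```
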
#### *To check if containers have resource limits defined*: [resource_policies](docs/LIST_OF_TESTS.md#resource-policies)
> CPU and memory [resources should have a limit](https://hub.armo.cloud/docs/c-0009) set for every container or namespace to prevent resource exhaustion. This control identifies all the Pods without resource limit definitions by checking their YAML definition files as well as their namespace LimitRange objects. It is also recommended to use a ResourceQuota object to restrict overall namespace resources, but this is not verified by this control.
#### *To check if containers have CPU limits defined*: [cpu_limits](docs/LIST_OF_TESTS.md#cpu-limits)
> Every container [should have a limit set for the CPU available to it](https://hub.armo.cloud/docs/c-0270), defined per container or per namespace, to prevent resource exhaustion. This control identifies all the Pods without CPU limit definitions by checking their YAML definition files as well as their namespace LimitRange objects. It is also recommended to use a ResourceQuota object to restrict overall namespace resources, but this is not verified by this control.
#### *To check if containers have memory limits defined*: [memory_limits](docs/LIST_OF_TESTS.md#memory-limits)
> Every container [should have a limit set for the memory available to it](https://hub.armo.cloud/docs/c-0271), defined per container or per namespace, to prevent resource exhaustion. This control identifies all the Pods without memory limit definitions by checking their YAML definition files as well as their namespace LimitRange objects. It is also recommended to use a ResourceQuota object to restrict overall namespace resources, but this is not verified by this control.
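
As a reference point, a container spec that satisfies both controls sets explicit CPU and memory limits; the names and values below are illustrative only:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: limits-demo               # placeholder name
spec:
  containers:
    - name: app
      image: nginx:1.25           # placeholder image
      resources:
        requests:
          cpu: "250m"
          memory: "128Mi"
        limits:
          cpu: "500m"             # checked by cpu_limits
          memory: "256Mi"         # checked by memory_limits
```
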
#### *To check if containers have immutable file systems*: [immutable_file_systems](docs/LIST_OF_TESTS.md#immutable-file-systems)
> Mutable container filesystem can be abused to gain malicious code and data injection into containers. By default, containers are permitted unrestricted execution within their own context. An attacker who has access to a container, [can create files](https://hub.armo.cloud/docs/c-0017) and download scripts as they wish, and modify the underlying application running on the container.
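
An immutable container filesystem is typically requested through the container securityContext; here is a minimal sketch (names and image are placeholders) with a writable emptyDir for scratch data:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: readonly-fs-demo               # placeholder name
spec:
  containers:
    - name: app
      image: nginx:1.25                # placeholder image
      securityContext:
        readOnlyRootFilesystem: true   # root filesystem becomes immutable
      volumeMounts:
        - name: tmp
          mountPath: /tmp              # writable scratch space where the app needs it
  volumes:
    - name: tmp
      emptyDir: {}
```
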
@@ -376,8 +379,5 @@ closing watches for ConfigMaps marked as immutable.*"
#### *Check if the platform is using insecure ports for the API server*: [Control_plane_hardening](docs/LIST_OF_TESTS.md#control-plane-hardening)
> *The control plane is the core of Kubernetes and gives users the ability to view containers, schedule new Pods, read Secrets, and execute commands in the cluster. Therefore, it should be protected. It is recommended to avoid control plane exposure to the Internet or to an untrusted network and require TLS encryption.
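
On kubeadm-style clusters, one rough way to spot an insecure API server port is to look for the relevant flag on the kube-apiserver pod. This is only a heuristic and assumes the standard kubeadm component label:

```
kubectl -n kube-system get pods -l component=kube-apiserver -o yaml | grep -i insecure-port
```
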
#### *Check if the Dashboard is exposed externally*: [Dashboard exposed](docs/LIST_OF_TESTS.md#dashboard-exposed)
> *If the Kubernetes dashboard is exposed externally in Dashboard versions before 2.0.1, it will allow unauthenticated remote management of the cluster. It's best practice not to expose the K8s Dashboard or any management planes if they're unsecured.
#### *Check if Tiller is being used on the platform*: [Tiller images](docs/LIST_OF_TESTS.md#tiller-images)
> *Tiller, found in Helm v2, has known security challenges. It requires administrative privileges and acts as a shared resource accessible to any authenticated user. Tiller can lead to privilege escalation as restricted users can impact other users. It is recommended to use Helm v3+, which does not contain Tiller, for these reasons.
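
A quick, informal way to check for Tiller before running the test is to look for any pod whose name contains "tiller" (a heuristic only, not what the test itself does):

```
kubectl get pods --all-namespaces | grep -i tiller
```
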
3 changes: 2 additions & 1 deletion TEST-CATEGORIES.md
@@ -94,7 +94,8 @@ The CNTI Test Catalog validates interoperability of CNF **workloads** supplied b
- Checks for non-root containers.
- Checks PID and IPC privileges.
- Checks for Linux hardening, e.g., that SELinux is used.
- Checks resource policies defined.
- Checks memory limits are defined.
- Checks CPU limits are defined.
- Checks for immutable file systems.
- Checks if any hostPath mounts are used.

34 changes: 17 additions & 17 deletions USAGE.md
@@ -998,20 +998,33 @@ Use AppArmor, Seccomp, SELinux and Linux Capabilities mechanisms to restrict con



## [Resource policies](docs/LIST_OF_TESTS.md#resource-policies)
## [CPU limits](docs/LIST_OF_TESTS.md#cpu-limits)

##### To run the Resource policies test, you can use the following command:
##### To run the CPU limits test, you can use the following command:
```
./cnf-testsuite resource_policies
./cnf-testsuite cpu_limits
```

<b>Remediation for failing this test:</b>

Define LimitRange and ResourceQuota policies to limit resource usage for namespaces or in the deployment/POD yamls.
Define LimitRange and ResourceQuota policies to limit CPU usage for namespaces or in the deployment/POD yamls.

</b>
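
A sketch of a namespace-level LimitRange that applies a default CPU limit to containers that do not set one; the namespace and values are illustrative:

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: cpu-limit-range           # illustrative name
  namespace: cnf-namespace        # illustrative namespace
spec:
  limits:
    - type: Container
      default:
        cpu: "500m"               # default CPU limit applied when none is set
      defaultRequest:
        cpu: "250m"               # default CPU request applied when none is set
```
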


## [Memory limits](docs/LIST_OF_TESTS.md#memory-limits)

##### To run the memory limits test, you can use the following command:
```
./cnf-testsuite memory_limits
```

<b>Remediation for failing this test:</b>

Define LimitRange and ResourceQuota policies to limit memory usage for namespaces or in the deployment/POD yamls.

</b>
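
Similarly, a ResourceQuota can cap the total memory that a namespace may request and limit; the namespace and values are illustrative:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: memory-quota              # illustrative name
  namespace: cnf-namespace        # illustrative namespace
spec:
  hard:
    requests.memory: "1Gi"        # total memory requests allowed in the namespace
    limits.memory: "2Gi"          # total memory limits allowed in the namespace
```
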


## [Immutable File Systems](docs/LIST_OF_TESTS.md#immutable-file-systems)

@@ -1369,19 +1382,6 @@ See more at [ARMO-C0005](https://bit.ly/C0005_Control_Plane)
./cnf-testsuite platform:control_plane_hardening
```

## [Dashboard exposed](docs/LIST_OF_TESTS.md#dashboard-exposed)

##### To run the Dashboard exposed test, you can use the following command:
```
./cnf-testsuite platform:exposed_dashboard
```

<b>Remediation for failing this test: </b>

Update dashboard version to v2.0.1 or above.

</b>


## [Tiller images](docs/LIST_OF_TESTS.md#tiller-images)

11 changes: 0 additions & 11 deletions docs/LIST_OF_TESTS.md
@@ -814,17 +814,6 @@ List of Platform Tests
[**Rationale & Reasoning**](../RATIONALE.md#check-if-the-plateform-is-using-insecure-ports-for-the-api-server-control_plane_hardening)


## [Dashboard exposed](https://github.com/cnti-testcatalog/testsuite/blob/v0.27.0/src/tasks/platform/security.cr#L54)
- Expectation: The K8s dashboard should not be exposed to the public internet when the software version is older than v2.0.1

**What's tested:** Checks if the Kubernetes dashboard exists and is exposed externally as a service (NodePort/LoadBalancer) and if the software version of the container image is older than v2.0.1.

[**Usage**](../USAGE.md#dashboard-exposed)

[**Rationale & Reasoning**](../RATIONALE.md#check-if-the-dashboard-is-exposed-externally-dashboard-exposed)



## [Tiller images](https://github.com/cnti-testcatalog/testsuite/blob/v0.27.0/src/tasks/platform/security.cr#L75)
- Added in release v0.27.0
- Expectation: The platform should be using Helm v3+ without Tiller.
10 changes: 6 additions & 4 deletions embedded_files/points.yml
@@ -249,9 +249,6 @@
- name: cluster_admin
emoji: "🔓🔑"
tags: ["platform", "platform:security", "dynamic"]
- name: exposed_dashboard
emoji: "🔓🔑"
tags: ["platform", "platform:security", "dynamic"]
- name: kube_state_metrics
emoji: "📶☠"
tags: [platform, "platform:observability", dynamic]
@@ -289,7 +286,12 @@
pass: 1
fail: 0

- name: resource_policies
- name: cpu_limits
emoji: "🔓🔑"
tags: [security, dynamic, workload, cert, essential]
pass: 100

- name: memory_limits
emoji: "🔓🔑"
tags: [security, dynamic, workload, cert, essential]
pass: 100
7 changes: 7 additions & 0 deletions sample-cnfs/sample-nonroot/cnf-testsuite.yml
@@ -0,0 +1,7 @@
---
manifest_directory: manifests
release_name: nginx
helm_repository:
  name:
  repo_url:
allowlist_helm_chart_container_names: []
15 changes: 15 additions & 0 deletions sample-cnfs/sample-nonroot/manifests/pod.yml
@@ -0,0 +1,15 @@
apiVersion: v1
kind: Pod
metadata:
  name: security-context-demo
spec:
  securityContext:
    runAsUser: 1000  # greater than 999, so the pod does not run as the root user
    runAsGroup: 3000 # greater than 999, so the pod does not use the root group
    fsGroup: 2000
  containers:
  - name: sec-ctx-demo
    image: busybox
    command: [ "sh", "-c", "sleep 1h" ]
    securityContext:
      runAsNonRoot: false # alternatively, this can be runAsNonRoot: true
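
Once this sample pod is running, the effective IDs can be spot-checked from inside the container (the pod name comes from the manifest above; busybox provides the id command):

```
kubectl exec security-context-demo -- id
```

This should report uid 1000 and gid 3000, matching the pod-level securityContext.
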
2 changes: 1 addition & 1 deletion shard.lock
@@ -54,7 +54,7 @@ shards:

kubectl_client:
git: https://github.com/cnf-testsuite/kubectl_client.git
version: 1.0.5
version: 1.0.6

popcorn:
git: https://github.com/icyleaf/popcorn.git
2 changes: 1 addition & 1 deletion shard.yml
@@ -49,7 +49,7 @@ dependencies:
version: ~> 1.0.0
kubectl_client:
github: cnf-testsuite/kubectl_client
version: ~> 1.0.5
version: ~> 1.0.6
cluster_tools:
github: cnf-testsuite/cluster_tools
version: ~> 1.0.0
4 changes: 2 additions & 2 deletions spec/platform/platform_spec.cr
@@ -39,8 +39,8 @@ describe "Platform" do
(/PASSED: K8s conformance test has no failures/ =~ response_s).should_not be_nil
end

it "individual tasks like 'platform:exposed_dashboard' should not require an installed cnf to run", tags: ["platform"] do
response_s = `./cnf-testsuite platform:exposed_dashboard`
it "individual tasks like 'platform:control_plane_hardening' should not require an installed cnf to run", tags: ["platform"] do
response_s = `./cnf-testsuite platform:control_plane_hardening`
LOGGING.info response_s
(/You must install a CNF first./ =~ response_s).should be_nil
end
43 changes: 2 additions & 41 deletions spec/platform/security_spec.cr
@@ -11,7 +11,7 @@ describe "Platform" do
it "'control_plane_hardening' should pass if the control plane has been hardened", tags: ["platform:security"] do
response_s = `./cnf-testsuite platform:control_plane_hardening`
Log.info { response_s }
(/(PASSED: Control plane hardened)/ =~ response_s).should_not be_nil
(/(PASSED: Insecure port of Kubernetes API server is not enabled)/ =~ response_s).should_not be_nil
end

it "'cluster_admin' should fail on a cnf that uses a cluster admin binding", tags: ["platform:security"] do
@@ -21,51 +21,12 @@
response_s = `./cnf-testsuite platform:cluster_admin`
LOGGING.info response_s
$?.success?.should be_true
(/FAILED: Users with cluster admin role found/ =~ response_s).should_not be_nil
(/FAILED: Users with cluster-admin RBAC permissions found/ =~ response_s).should_not be_nil
# ensure
# `./cnf-testsuite cnf_cleanup cnf-config=./sample-cnfs/sample-privilege-escalation/cnf-testsuite.yml`
end
end

it "'exposed_dashboard' should fail when the Kubernetes dashboard is exposed", tags: ["platform:security"] do
dashboard_install_url = "https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml"
begin
# Run the exposed_dashboard test to confirm no vulnerability before dashboard is installed
response_s = `./cnf-testsuite platform:exposed_dashboard`
(/PASSED: No exposed dashboard found in the cluster/ =~ response_s).should_not be_nil

# Install the dashboard version 2.0.0.
# According to the kubescape rule, anything less than v2.0.1 would fail.
KubectlClient::Apply.file(dashboard_install_url)

# Construct patch spec to expose Kubernetes Dashboard on a Node Port
patch_spec = {
spec: {
type: "NodePort",
ports: [
{
nodePort: 30500,
port: 443,
protocol: "TCP",
targetPort: 8443
}
]
}
}
# Apply the patch to expose the dashboard on the NodePort
result = KubectlClient::Patch.spec("service", "kubernetes-dashboard", patch_spec.to_json, "kubernetes-dashboard")

# Run the test again to confirm vulnerability with an exposed dashboard
response_s = `./cnf-testsuite platform:exposed_dashboard`
Log.info { response_s }
$?.success?.should be_true
(/FAILED: Found exposed dashboard in the cluster/ =~ response_s).should_not be_nil
ensure
# Ensure to remove the Kubectl dashboard after the test
KubectlClient::Delete.file(dashboard_install_url)
end
end

it "'helm_tiller' should fail if Helm Tiller is running in the cluster", tags: ["platform:security"] do
ShellCmd.run("kubectl run tiller --image=rancher/tiller:v2.11.0", "create_tiller")
KubectlClient::Get.resource_wait_for_install("pod", "tiller")
2 changes: 1 addition & 1 deletion spec/utils/cnf_manager_spec.cr
@@ -147,7 +147,7 @@ describe "SampleUtils" do

it "'CNFManager::Points.all_task_test_names' should return all tasks names", tags: ["points"] do
CNFManager::Points.clean_results_yml
tags = ["alpha_k8s_apis", "application_credentials", "cni_compatible", "container_sock_mounts", "database_persistence", "default_namespace", "disk_fill", "elastic_volumes", "external_ips", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "helm_chart_published", "helm_chart_valid", "helm_deploy", "host_network", "host_pid_ipc_privileges", "hostpath_mounts", "hostport_not_used", "immutable_configmap", "immutable_file_systems", "increase_decrease_capacity", "ingress_egress_blocked", "insecure_capabilities", "ip_addresses", "latest_tag", "linux_hardening", "liveness", "log_output", "no_local_volume_configuration", "node_drain", "nodeport_not_used", "non_root_containers", "open_metrics", "operator_installed", "oran_e2_connection", "pod_delete", "pod_dns_error", "pod_io_stress", "pod_memory_hog", "pod_network_corruption", "pod_network_duplication", "pod_network_latency", "privilege_escalation", "privileged", "privileged_containers", "prometheus_traffic", "readiness", "reasonable_image_size", "reasonable_startup_time", "require_labels", "resource_policies", "rollback", "rolling_downgrade", "rolling_update", "rolling_version_change", "routed_logs", "secrets_used", "selinux_options", "service_account_mapping", "service_discovery", "shared_database", "sig_term_handled", "single_process_type", "smf_upf_heartbeat", "specialized_init_system", "suci_enabled", "symlink_file_system", "sysctls", "tracing", "versioned_tag", "volume_hostpath_not_found", "zombie_handled"]
tags = ["alpha_k8s_apis", "application_credentials", "cni_compatible", "container_sock_mounts", "database_persistence", "default_namespace", "disk_fill", "elastic_volumes", "external_ips", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "helm_chart_published", "helm_chart_valid", "helm_deploy", "host_network", "host_pid_ipc_privileges", "hostpath_mounts", "hostport_not_used", "immutable_configmap", "immutable_file_systems", "increase_decrease_capacity", "ingress_egress_blocked", "insecure_capabilities", "ip_addresses", "latest_tag", "linux_hardening", "liveness", "log_output", "no_local_volume_configuration", "node_drain", "nodeport_not_used", "non_root_containers", "open_metrics", "operator_installed", "oran_e2_connection", "pod_delete", "pod_dns_error", "pod_io_stress", "pod_memory_hog", "pod_network_corruption", "pod_network_duplication", "pod_network_latency", "privilege_escalation", "privileged", "privileged_containers", "prometheus_traffic", "readiness", "reasonable_image_size", "reasonable_startup_time", "require_labels", "cpu_limits", "memory_limits", "rollback", "rolling_downgrade", "rolling_update", "rolling_version_change", "routed_logs", "secrets_used", "selinux_options", "service_account_mapping", "service_discovery", "shared_database", "sig_term_handled", "single_process_type", "smf_upf_heartbeat", "specialized_init_system", "suci_enabled", "symlink_file_system", "sysctls", "tracing", "versioned_tag", "volume_hostpath_not_found", "zombie_handled"]
(CNFManager::Points.all_task_test_names()).sort.should eq(tags.sort)
end

23 changes: 18 additions & 5 deletions spec/workload/security_spec.cr
@@ -157,14 +157,27 @@ describe "Security" do
end
end

it "'resource_policies' should pass on a cnf that has containers with resource limits defined", tags: ["security"] do
it "'cpu_limits' should pass on a cnf that has containers with cpu limits set", tags: ["security"] do
begin
LOGGING.info `./cnf-testsuite cnf_setup cnf-config=./sample-cnfs/sample-coredns-cnf`
$?.success?.should be_true
response_s = `./cnf-testsuite resource_policies`
response_s = `./cnf-testsuite cpu_limits`
LOGGING.info response_s
$?.success?.should be_true
(/PASSED: Containers have resource limits defined/ =~ response_s).should_not be_nil
(/PASSED: Containers have CPU limits set/ =~ response_s).should_not be_nil
ensure
`./cnf-testsuite cnf_cleanup cnf-config=./sample-cnfs/sample-coredns-cnf`
end
end

it "'memory_limits' should pass on a cnf that has containers with memory limits set", tags: ["security"] do
begin
LOGGING.info `./cnf-testsuite cnf_setup cnf-config=./sample-cnfs/sample-coredns-cnf`
$?.success?.should be_true
response_s = `./cnf-testsuite memory_limits`
LOGGING.info response_s
$?.success?.should be_true
(/PASSED: Containers have memory limits set/ =~ response_s).should_not be_nil
ensure
`./cnf-testsuite cnf_cleanup cnf-config=./sample-cnfs/sample-coredns-cnf`
end
@@ -198,14 +211,14 @@ describe "Security" do

it "'non_root_containers' should pass on a cnf that does not have containers running with root user or user with root group memberships", tags: ["security"] do
begin
LOGGING.info `./cnf-testsuite cnf_setup cnf-config=./sample-cnfs/sample-nonroot-containers`
LOGGING.info `./cnf-testsuite cnf_setup cnf-config=./sample-cnfs/sample-nonroot`
$?.success?.should be_true
response_s = `./cnf-testsuite non_root_containers`
LOGGING.info response_s
$?.success?.should be_true
(/FAILED: Found containers running with root user or user with root group membership/ =~ response_s).should be_nil
ensure
`./cnf-testsuite cnf_cleanup cnf-config=./sample-cnfs/sample-nonroot-containers`
`./cnf-testsuite cnf_cleanup cnf-config=./sample-cnfs/sample-nonroot`
end
end

2 changes: 0 additions & 2 deletions src/tasks/cert/cert.cr
@@ -6,9 +6,7 @@ require "totem"
require "../utils/utils.cr"

desc "The CNF Test Suite program certifies a CNF based on passing some percentage of essential tests."
#task "cert", ["cert_security"] do |_, args|
task "cert", ["version", "cert_compatibility", "cert_state", "cert_security", "cert_configuration", "cert_observability", "cert_microservice", "cert_resilience"] do |_, args|
# task "cert", ["cert_compatibility", "cert_state", "cert_security", "cert_configuration", "cert_observability", "cert_microservice", "cert_resilience", "latest_tag", "selinux_options", "single_process_type", "node_drain","liveness", "readiness", "log_output", "container_sock_mounts", "privileged_containers", "non_root_containers", "resource_policies", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration"] do |_, args|
VERBOSE_LOGGING.info "cert" if check_verbose(args)

stdout_success "RESULTS SUMMARY"
3 changes: 2 additions & 1 deletion src/tasks/cert/cert_security.cr
@@ -11,7 +11,8 @@ task "cert_security", [
"symlink_file_system",
"privilege_escalation",
"insecure_capabilities",
"resource_policies",
"cpu_limits",
"memory_limits",
"linux_hardening",
"ingress_egress_blocked",
"host_pid_ipc_privileges",
2 changes: 1 addition & 1 deletion src/tasks/constants.cr
@@ -14,7 +14,7 @@ NA = "na"
DEFAULT_POINTSFILENAME = "points_v1.yml"
PRIVILEGED_WHITELIST_CONTAINERS = ["chaos-daemon", "cluster-tools"]
SONOBUOY_K8S_VERSION = "0.56.14"
KUBESCAPE_VERSION = "2.0.158"
KUBESCAPE_VERSION = "3.0.8"
KUBESCAPE_FRAMEWORK_VERSION = "1.0.316"
KIND_VERSION = "0.17.0"
SONOBUOY_OS = "linux"
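
If the kubescape CLI is installed locally, its binary version can be compared against this constant (assumes kubescape is on the PATH):

```
kubescape version
```
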