
add e2e testing based on docker-compose #647

Open · wants to merge 1 commit into base: master
Conversation


@LinuxSuRen commented Nov 20, 2023

I'm not familiar with drone, so I've just provided a GitHub Actions script to run the testing. Please comment below if you have any suggestions about the e2e testing. Considering there are a lot of APIs, I only added some of them.

By the way, the following project (drone plugin) was archived.

Closes #646.

The related issue is #9.

@LinuxSuRen (Author)

Hi @JacieChao, I'm wondering if you have free time to review this PR.

@JacieChao (Collaborator)

Thanks for your contribution @LinuxSuRen
I will add this to my schedule when I have time.

test/e2e/testcase.yaml (review comments, resolved)
@Jason-ZW (Collaborator) left a comment


LGTM

@orangedeng (Member)

By the way, the following project (drone plugin) was archived.

Thanks for the information. The author of the buildx plugin is using woodpecker-ci (a community fork of drone) instead of drone. I think the plugin will keep working for the near future, but we will look for an alternative way to do this.

@Jason-ZW GitHub Actions is fine, but I think it is better to reuse drone for e2e testing in this project, or to run the e2e tests only at the tag/rc stage to reduce the number of GitHub Actions workflow runs.

In my opinion, it would be better to define an e2e test target in the Makefile and set up the test environment in Dockerfile.dapper.
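As a rough illustration of that suggestion, a Makefile `e2e` target could delegate to a small driver script like the one below. All paths (`test/e2e/...`) and the dry-run switch are assumptions for illustration, not taken from this PR; the script defaults to a dry run so it is safe on machines without docker or atest installed, and a real target would set `E2E_DRY_RUN=0`.

```shell
#!/bin/sh
# Hypothetical driver for a `make e2e` target (paths are assumptions).
# Defaults to a dry run: each step is printed instead of executed.
run() {
  if [ "${E2E_DRY_RUN:-1}" = "1" ]; then
    echo "would run: $*"
  else
    "$@"
  fi
}

e2e() {
  run docker compose -f test/e2e/docker-compose.yaml up -d
  run atest run -p test/e2e/testcase.yaml
  run docker compose -f test/e2e/docker-compose.yaml down
}

e2e
```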

@Jason-ZW (Collaborator) commented Dec 4, 2023

Yes, we don't have enough quota to run the e2e tests via GitHub Actions on every pull request; reusing drone and running the e2e tests at the tag stage is better.

Dapper has a certain learning and usage cost for contributors; perhaps @JacieChao and @orangedeng can help update the PR to use Dapper.

@LinuxSuRen (Author)

By the way, running the e2e tests in GitHub Actions is pretty quick. Below is an example that takes only around 3 minutes, so it might not need much quota.

https://github.com/halo-dev/halo/actions/runs/7081623919/job/19271232227

@orangedeng (Member)

The following logs are from a run of docker-compose.yml with autok3s v0.9.1. I am not sure whether it passed or not.

testcase.yaml pattern 1
found suites: 1
2023-12-05T02:39:36Z    ERROR   tracer.go:186   open stream error rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:11800: connect: connection refused"      {"SW_CTX": "[Your_ApplicationName,[email protected],N/A,N/A,-1]"}
2023-12-05T02:39:36Z    ERROR   tracer.go:186   report serviceInstance properties error rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:11800: connect: connection refused"        {"SW_CTX": "[Your_ApplicationName,[email protected],N/A,N/A,-1]"}
2023-12-05T02:39:36Z    ERROR   tracer.go:186   fetch dynamic configuration error rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:11800: connect: connection refused"      {"SW_CTX": "[Your_ApplicationName,[email protected],N/A,N/A,-1]"}
2023-12-05T02:39:36Z    ERROR   tracer.go:186   open stream error rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:11800: connect: connection refused"      {"SW_CTX": "[Your_ApplicationName,[email protected],N/A,N/A,-1]"}
2023-12-05T02:39:36Z    ERROR   tracer.go:186   open stream error rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:11800: connect: connection refused"      {"SW_CTX": "[Your_ApplicationName,[email protected],N/A,N/A,-1]"}
map[data:map[actions:map[] createTypes:map[cluster:http://autok3s:8080/v1/clusters] data:[map[actions:map[disable-explorer:http://autok3s:8080/v1/clusters/k3d-wxVY?action=disable-explorer download-kubeconfig:http://autok3s:8080/v1/clusters/k3d-wxVY?action=download-kubeconfig enable-explorer:http://autok3s:8080/v1/clusters/k3d-wxVY?action=enable-explorer join:http://autok3s:8080/v1/clusters/k3d-wxVY?action=join upgrade:http://autok3s:8080/v1/clusters/k3d-wxVY?action=upgrade] id:k3d-wxVY links:map[nodes:http://autok3s:8080/v1/clusters/k3d-wxVY?link=nodes remove:http://autok3s:8080/v1/clusters/k3d-wxVY self:http://autok3s:8080/v1/clusters/k3d-wxVY] master:1 name:wxVY provider:k3d status:Failed type:cluster worker:0] map[actions:map[disable-explorer:http://autok3s:8080/v1/clusters/fCRX?action=disable-explorer download-kubeconfig:http://autok3s:8080/v1/clusters/fCRX?action=download-kubeconfig enable-explorer:http://autok3s:8080/v1/clusters/fCRX?action=enable-explorer join:http://autok3s:8080/v1/clusters/fCRX?action=join upgrade:http://autok3s:8080/v1/clusters/fCRX?action=upgrade] id:fCRX links:map[nodes:http://autok3s:8080/v1/clusters/fCRX?link=nodes remove:http://autok3s:8080/v1/clusters/fCRX self:http://autok3s:8080/v1/clusters/fCRX] master:1 name:fCRX provider:native status:Failed type:cluster worker:0]] links:map[self:http://autok3s:8080/v1/clusters] resourceType:cluster type:collection]] == len(data.data) == 2
map[data:map[actions:map[] createTypes:map[sshKey:http://autok3s:8080/v1/sshKeys] data:[map[actions:map[export:http://autok3s:8080/v1/sshKeys/zurE?action=export] has-password:false id:zurE links:map[remove:http://autok3s:8080/v1/sshKeys/zurE self:http://autok3s:8080/v1/sshKeys/zurE] name:zurE ssh-key-public:ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDmQJLRFdAfRpzBqsPZydIFlsNc1dO+FvCJdH0Ag5tB6Km0GPFWfFFg/A+xTqFTQGjtkc6+XXAZREjmiBdHI3PCDnCS+yj9oezwDPfC/hmWGRzr8t3CuIzdCyXxSlAiMz2dvLr8USho3ixyzrGNVdUssBFWgMW0fNje4zzAqXEFWRGZLIbjNabhTgedBxBjEr3uV0o721sAFuK+15aGgD92MjiawM4k0TvQHXnf++d7dhe3je3AhFEkjIUkA4YO2J+bmUJN79CgbGCz1xh/l6gFS2YcrWbg5E/FYxMscm9m6JJ/4X/fflyBY9MfyB+KAqAYbdYPYyoM5MS4eq8Oy557
 type:sshKey]] links:map[self:http://autok3s:8080/v1/sshKeys] resourceType:sshKey type:collection]] == len(data.data) == 1
map[data:map[actions:map[] createTypes:map[addon:http://autok3s:8080/v1/addons] data:[map[description:Default Rancher Manager add-on id:rancher links:map[remove:http://autok3s:8080/v1/addons/rancher self:http://autok3s:8080/v1/addons/rancher update:http://autok3s:8080/v1/addons/rancher] manifest:Ci0tLQphcGlWZXJzaW9uOiB2MQpraW5kOiBOYW1lc3BhY2UKbWV0YWRhdGE6CiAgbmFtZTogY2VydC1tYW5hZ2VyCi0tLQphcGlWZXJzaW9uOiB2MQpraW5kOiBOYW1lc3BhY2UKbWV0YWRhdGE6CiAgbmFtZTogY2F0dGxlLXN5c3RlbQotLS0KYXBpVmVyc2lvbjogaGVsbS5jYXR0bGUuaW8vdjEKa2luZDogSGVsbUNoYXJ0Cm1ldGFkYXRhOgogIG5hbWVzcGFjZToga3ViZS1zeXN0ZW0KICBuYW1lOiBjZXJ0LW1hbmFnZXIKc3BlYzoKICB0YXJnZXROYW1lc3BhY2U6IGNlcnQtbWFuYWdlcgogIHZlcnNpb246IHYxLjExLjAKICBjaGFydDogY2VydC1tYW5hZ2VyCiAgcmVwbzogaHR0cHM6Ly9jaGFydHMuamV0c3RhY2suaW8KICBzZXQ6CiAgICBpbnN0YWxsQ1JEczogInRydWUiCi0tLQphcGlWZXJzaW9uOiBoZWxtLmNhdHRsZS5pby92MQpraW5kOiBIZWxtQ2hhcnQKbWV0YWRhdGE6CiAgbmFtZTogcmFuY2hlcgogIG5hbWVzcGFjZToga3ViZS1zeXN0ZW0Kc3BlYzoKICB0YXJnZXROYW1lc3BhY2U6IGNhdHRsZS1zeXN0ZW0KICByZXBvOiB7eyAucmFuY2hlclJlcG8gfCBkZWZhdWx0ICJodHRwczovL3JlbGVhc2VzLnJhbmNoZXIuY29tL3NlcnZlci1jaGFydHMvbGF0ZXN0IiB9fQogIGNoYXJ0OiByYW5jaGVyCiAgdmVyc2lvbjoge3sgLlZlcnNpb24gfCBkZWZhdWx0ICIiIH19CiAgdmFsdWVzQ29udGVudDogfC0KICAgIGhvc3RuYW1lOiAie3sgcHJvdmlkZXJUZW1wbGF0ZSAicHVibGljLWlwLWFkZHJlc3MiIH19Ont7IC5QdWJsaWNQb3J0IHwgZGVmYXVsdCAzMDQ0MyB9fSIKICAgIGluZ3Jlc3M6CiAgICAgIGVuYWJsZWQ6IGZhbHNlCiAgICBnbG9iYWw6CiAgICAgIGNhdHRsZToKICAgICAgICBwc3A6CiAgICAgICAgICBlbmFibGVkOiBmYWxzZQogICAgYm9vdHN0cmFwUGFzc3dvcmQ6IHt7IC5ib290c3RyYXBQYXNzd29yZCB8IGRlZmF1bHQgIlJhbmNoZXJGb3JGdW4iIH19CiAgICBhbnRpQWZmaW5pdHk6ICJyZXF1aXJlZCIKICAgIHJlcGxpY2FzOiAxCi0tLQphcGlWZXJzaW9uOiB2MQpraW5kOiBTZXJ2aWNlCm1ldGFkYXRhOgogIGxhYmVsczoKICAgIGFwcDogcmFuY2hlcgogIG5hbWU6IHJhbmNoZXItbGItc3ZjCiAgbmFtZXNwYWNlOiBjYXR0bGUtc3lzdGVtCnNwZWM6CiAgcG9ydHM6CiAgICAtIG5hbWU6IGh0dHAKICAgICAgcG9ydDoge3sgLkhUVFBQb3J0IHwgZGVmYXVsdCAzMDA4MCB9fQogICAgICBwcm90b2NvbDogVENQCiAgICAgIHRhcmdldFBvcnQ6IDgwCiAgICAtIG5hbWU6IGh0dHBzCiAgICAgIHBvcnQ6IHt7IC5QdWJsaWN
Qb3J0IHwgZGVmYXVsdCAzMDQ0MyB9fQogICAgICBwcm90b2NvbDogVENQCiAgICAgIHRhcmdldFBvcnQ6IDQ0MwogIHNlbGVjdG9yOgogICAgYXBwOiByYW5jaGVyCiAgc2Vzc2lvbkFmZmluaXR5OiBOb25lCiAgdHlwZTogTG9hZEJhbGFuY2VyCg== name:rancher type:addon]] links:map[self:http://autok3s:8080/v1/addons] resourceType:addon type:collection]] == len(data.data) == 1
routing end with 1.102108892s
API Average Max Min QPS Count Error
POST http://autok3s:8080/v1/sshKeys 84.490682ms 84.490682ms 84.490682ms 0 1 0
POST http://autok3s:8080/v1/clusters 55.360631ms 163.466559ms 1.007418ms 0 3 0
POST http://autok3s:8080/v1/credentials 10.309855ms 11.049823ms 9.902385ms 0 4 0
GET http://autok3s:8080/v1/sshKeys/zurE 1.977901ms 1.977901ms 1.977901ms 0 1 0
GET http://autok3s:8080/v1/clusters 1.419202ms 1.419202ms 1.419202ms 0 1 0
GET http://autok3s:8080/v1/clusters/k3d-wxVY?link=nodes 1.33113ms 1.33113ms 1.33113ms 0 1 0
DELETE http://autok3s:8080/v1/clusters/k3d-wxVY 1.027705ms 1.027705ms 1.027705ms 0 1 0
GET http://autok3s:8080/v1/sshKeys 938.248µs 938.248µs 938.248µs 0 1 0
GET http://autok3s:8080/v1/addons 841.101µs 841.101µs 841.101µs 0 1 0
consume: 1.102246087s

There are several issues that need to be addressed:

  • If running PR testing, the following error will be thrown because the version is in the wrong format:
    panic: version string "pr-LinuxSuRen-647" doesn't match expected regular expression: "^v(\d+\.\d+\.\d+)"
    
    goroutine 1 [running]:
    k8s.io/component-base/metrics.parseVersion({{0x0, 0x0}, {0x0, 0x0}, {0x54e4fc0, 0x11}, {0x54fe280, 0x28}, {0x54df168, 0x5}, ...})
           /go/pkg/mod/k8s.io/[email protected]/metrics/version_parser.go:47 +0x274
    k8s.io/component-base/metrics.newKubeRegistry({{0x0, 0x0}, {0x0, 0x0}, {0x54e4fc0, 0x11}, {0x54fe280, 0x28}, {0x54df168, 0x5}, ...})
           /go/pkg/mod/k8s.io/[email protected]/metrics/registry.go:349 +0x119
    k8s.io/component-base/metrics.NewKubeRegistry()
           /go/pkg/mod/k8s.io/[email protected]/metrics/registry.go:363 +0x78
    k8s.io/component-base/metrics/legacyregistry.init()
           /go/pkg/mod/k8s.io/[email protected]/metrics/legacyregistry/registry.go:30 +0x1d
    
  • The e2e testing needs a passed/failed result.
  • Resource cleanup logic should be considered in e2e testing. autok3s won't start a docker daemon, so it will probably reuse the host docker for k3d testing, and some k3d containers will be left over after the tests finish.
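On the first point: the panic originates in k8s.io/component-base, which validates the build version against `^v(\d+\.\d+\.\d+)`, so a PR build versioned `pr-LinuxSuRen-647` is rejected. A quick shell illustration of the same check (the helper name is mine):

```shell
# Mimics the version check from the panic message above using grep -E.
check_version() {
  if echo "$1" | grep -Eq '^v[0-9]+\.[0-9]+\.[0-9]+'; then
    echo "$1: ok"
  else
    echo "$1: rejected"
  fi
}

check_version "v0.9.1"            # ok
check_version "pr-LinuxSuRen-647" # rejected
```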

@LinuxSuRen (Author)

The following command will exit with a non-zero code if any errors happen:

atest run -p testcase.yaml
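To illustrate how CI can branch on that exit code (here `false` stands in for a failing `atest run`, since this sketch does not assume atest is installed):

```shell
# `false` plays the role of a failing `atest run -p testcase.yaml`;
# `$?` in the else branch holds its exit status.
run_suite() {
  if false; then
    echo "e2e passed"
  else
    echo "e2e failed with exit code $?"
  fi
}

run_suite  # prints: e2e failed with exit code 1
```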

All the test cases passed, as shown in your log output. But I don't know how to reproduce this error:

panic: version string "pr-LinuxSuRen-647" doesn't match expected regular expression: "^v(\d+\.\d+\.\d+)"

The e2e testing needs a passed/failed result.

Currently, we can see the result in the table, and inspect each test case if there is an error. A new feature could also report a summary, such as: Total: 10, Error: 2.

Resource cleanup logic should be considered in e2e testing

Currently, we could add a post step to do that. Please allow me to do more tests.
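A hedged sketch of what such a post step might do, assuming leftover k3d clusters on the host docker are the main residue (guarded so it is a no-op on machines where k3d is not installed):

```shell
# Hypothetical post-test cleanup: delete any k3d clusters left on the
# host docker after the e2e run. Skipped entirely if k3d is missing.
cleanup() {
  if command -v k3d >/dev/null 2>&1; then
    k3d cluster delete --all
  fi
  echo "cleanup finished"
}

cleanup
```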

Successfully merging this pull request may close these issues.

Proposal: Adding e2e testing for this project
4 participants