
[BUG] After unbinding an EIP and rebinding it to a VM in another VPC, pings to that EIP drop intermittently. #20191

Open
khw934 opened this issue May 1, 2024 · 2 comments
Assignees
Labels
bug Something isn't working state/awaiting processing

Comments


khw934 commented May 1, 2024

What happened:
After unbinding an EIP and rebinding it to a VM in a different VPC, pings to that EIP drop intermittently. Packet-capture analysis shows that unbinding or deleting an EIP does not fully clean up its state.
Environment:
The EIP gateway is deployed standalone. As long as an EIP is not reused, the network behaves normally.

  • OS (e.g. cat /etc/os-release):

[root@ecm-232e-0001 ~]# cat /etc/os-release
NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"

CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"

[root@ecm-232e-0001 ~]#

  • Kernel (e.g. uname -a):

[root@ecm-232e-0001 ~]# uname -a
Linux ecm-232e-0001 5.4.130-1.yn20230805.el7.x86_64 #1 SMP Wed Oct 11 03:26:01 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
[root@ecm-232e-0001 ~]#

  • Host: (e.g. dmidecode | egrep -i 'manufacturer|product' |sort -u)

[root@ecm-232e-0001 ~]# dmidecode | egrep -i 'manufacturer|product' |sort -u
Manufacturer: GoStack Foundation
Manufacturer: Red Hat
Product Name: Gostack Agent
[root@ecm-232e-0001 ~]#

  • Service Version (e.g. kubectl exec -n onecloud $(kubectl get pods -n onecloud | grep climc | awk '{print $1}') -- climc version-list):

[root@ecm-232e-0001 ~]# climc version-list
Get "https://192.168.1.3:30898/version": dial tcp 192.168.1.3:30898: connect: connection refused
Get "https://192.168.1.3:30893/version": dial tcp 192.168.1.3:30893: connect: connection refused
Get "https://192.168.1.3:30892/version": dial tcp 192.168.1.3:30892: connect: connection refused
Get "https://192.168.1.3:30443/version": dial tcp 192.168.1.3:30443: connect: connection refused
+---------------+--------------------------------------------+
| Field         | Value                                      |
+---------------+--------------------------------------------+
| ansible | release/3.11(3ffea07d3124042402) |
| apimap | release/3.11(3ffea07d3124042402) |
| cloudmon | release/3.11(3ffea07d3124042402) |
| cloudproxy | release/3.11(3ffea07d3124042402) |
| compute_v2 | release/3.11(3ffea07d3124042402) |
| devtool | release/3.11(3ffea07d3124042402) |
| identity | release/3.11(3ffea07d3124042402) |
| image | release/3.11(3ffea07d3124042402) |
| k8s | heads/v3.11.3-20240422.2(e6c3e48724042402) |
| log | release/3.11(3ffea07d3124042402) |
| monitor | release/3.11(3ffea07d3124042402) |
| notify | release/3.11(3ffea07d3124042402) |
| scheduledtask | release/3.11(3ffea07d3124042402) |
| scheduler | release/3.11(3ffea07d3124042402) |
| vpcagent | release/3.11(3ffea07d3124042402) |
| webconsole | release/3.11(3ffea07d3124042402) |
| yunionconf | release/3.11(3ffea07d3124042402) |
+---------------+--------------------------------------------+
[root@ecm-232e-0001 ~]#


21.zip (attached)
Below is the packet-capture analysis:

[image]

This capture was taken at the EIP gateway.

As soon as the EIP gateway service, yunion-sdnagent-eipgw, is restarted, everything returns to normal.
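To make the stale-flow symptom concrete, here is a minimal sketch. The flow lines, the EIP (203.0.113.10), and the inner VPC addresses are all fabricated for illustration in ovs-ofctl dump-flows style; they are not taken from this report's captures, and on a real gateway you would dump the flows from the gateway's bridge instead:

```shell
# Hypothetical sketch: the flow lines below are made-up samples; on a real
# gateway you would instead run something like
#   ovs-ofctl dump-flows <eip-bridge>
# and look for the reused EIP still mapped to the old VPC address.
cat <<'EOF' > /tmp/eipgw-flows.txt
ip,nw_dst=203.0.113.10 actions=mod_nw_dst:192.168.10.5,output:2
ip,nw_dst=203.0.113.11 actions=mod_nw_dst:192.168.20.7,output:3
EOF

# A stale entry for the reused EIP (203.0.113.10) still rewrites the
# destination to the OLD VPC address (192.168.10.5), not the new VM's:
grep 'nw_dst=203.0.113.10' /tmp/eipgw-flows.txt

# Workaround reported in this issue: restart the gateway agent so it
# rebuilds its flow table:
#   systemctl restart yunion-sdnagent-eipgw
```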

khw934 added the bug (Something isn't working) label May 1, 2024
khw934 commented May 3, 2024

I have reproduced the issue:
When deleting a VM, if you choose not to delete its EIP, and later create a new VM and bind it to that previous EIP, the problem appears.
[image]

These are the flow records queried at the EIP gateway; the corresponding VPC address is wrong:

[image]

[image]

After running systemctl restart yunion-sdnagent-eipgw, the flow records are correct again:

[image]

swordqiu self-assigned this May 6, 2024
khw934 commented May 7, 2024

Any update on this?
