Describe the bug
With Nacos 2.3.2 deployed as a cluster using the embedded data source, taking services online/offline works correctly on only one node; deregistration on the other nodes fails with a 400 response.
Expected behavior
Service online/offline maintenance should work from the web UI on every node. Please either fix this issue or upgrade the console's API calls to v2.
Actual behavior
Three nodes were deployed on ports 24000, 24002, and 24004 in cluster mode with the embedded data source. After deployment, only the 24004 node's web UI can take services online/offline normally; deregistration on the 24000 and 24002 nodes fails with a 400 response.
How to Reproduce
Download Nacos and update cluster.conf with the correct configuration, kept identical across all three nodes, then start the three servers from the command line with startup.cmd -p embedded.
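For reference, a minimal sketch of what that cluster.conf might look like for this three-node, single-host setup (host address and ports are the ones used in this report; the file must be identical on all three nodes):

```shell
# Write the member list shared by all three nodes (illustrative values
# taken from this report; adjust host/ports for your environment).
mkdir -p conf
cat > conf/cluster.conf <<'EOF'
172.16.20.214:24000
172.16.20.214:24002
172.16.20.214:24004
EOF

# Each node is then started from its own installation directory with the
# embedded data source, as described above:
#   startup.cmd -p embedded      (Windows)
#   sh startup.sh -p embedded    (Linux)
```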
Configure the application's startup parameters so it registers with the cluster: -Dspring.cloud.nacos.server-addr=172.16.20.214:24000,172.16.20.214:24004,172.16.20.214:24002 -Xms256M -Xmx256M -Dspring.profiles.active=dev
After a successful startup, open the web UI of each of the three Nacos nodes. The cluster status shows as healthy, and under Service Management > Service List the services are registered normally; however, as shown in the attached screenshot, deregistering a service fails on the 24000 and 24002 nodes.
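The cluster-status check can also be done from the command line; a sketch, assuming Nacos's standard v1 cluster-nodes endpoint and the addresses from this report:

```shell
# Build the URL for the Nacos cluster node list endpoint (host/port taken
# from this report; the path is the standard v1 core API).
NODES_URL="http://172.16.20.214:24000/nacos/v1/core/cluster/nodes"

# Against a live cluster one would run:
#   curl -s "$NODES_URL"
# and verify that all three members are listed and UP.
echo "$NODES_URL"
```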
Response message: “<!doctype html><html lang="en"><head><title>HTTP Status 400–Bad Request</title><style type="text/css ">body {font-family:Tahoma,Arial,sans-serif;} h1, h2, h3, b {color:white;background-color:#525D76;} h1 {font-size:22px;} h2 {font-size:16px;} h3 {font-size:14px;} p {font-size:12px;} a {color:black;} .line {height:1px;background-color:#525D76;border:none;}</style></head><body><h1>HTTP Status 400 – Bad Request</h1></body></html>”
Deregistration via the 24004 node succeeds, and deregistering there also propagates the deregistration to the 24000 and 24002 nodes, which matches the expected behavior.
Response message:
ok
Desktop (please complete the following information):
OS: Reproducible on both CentOS 7.6 and Windows 8.
Additional context
Same issue as #10345. API testing with Apifox and curl matches the web UI behavior: the v1 instance-modification API returns a 400 error on the 24000 and 24002 nodes but works on 24004, while the v2 instance-modification API works on all three nodes. The web UI calls the v1 API, so this problem in v1 may have been introduced in a recent release.
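A sketch of that API comparison with curl (the server address is from this report; the service name "demo-service" and the instance address/port are hypothetical placeholders):

```shell
# Take an instance offline by setting enabled=false, first via the v1 API
# (what the web UI uses), then via the v2 API. Placeholder values: the
# service "demo-service" and instance 172.16.20.215:8080 are made up.
NODE="http://172.16.20.214:24000"
PARAMS="serviceName=demo-service&ip=172.16.20.215&port=8080&enabled=false"

V1_URL="${NODE}/nacos/v1/ns/instance?${PARAMS}"   # returned 400 on 24000/24002 here
V2_URL="${NODE}/nacos/v2/ns/instance?${PARAMS}"   # worked on all three nodes

# Against a live cluster:
#   curl -X PUT "$V1_URL"
#   curl -X PUT "$V2_URL"
echo "$V1_URL"
echo "$V2_URL"
```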