Docker network used by Coolify server instance is 10.0.0.0 - collision #4723
pongraczi started this conversation in Improvement Requests
Replies: 2 comments
-
Hetzner Coolify app
Hetzner Docker app + manual Coolify install
-
@pongraczi I've no idea why they changed the default-address-pool. I know they needed to change the size, due to being limited to 32 networks, but IMO the base should have been left at the default 172.17.0.0 and changed to the following instead:
{
    "default-address-pools": [
        {
            "base": "172.17.0.0/12",
            "size": 20
        },
        {
            "base": "192.168.0.0/16",
            "size": 24
        }
    ]
}
The other reason is that 10.0.x.x is also used for Docker swarm mode, as well as being the default in a lot of private networks, like you mention.
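For reference, applying pools like these by hand is straightforward; a minimal sketch, assuming a systemd-based host with the Docker config at /etc/docker/daemon.json:

# Write the proposed pools and restart the daemon
cat > /etc/docker/daemon.json <<'EOF'
{
    "default-address-pools": [
        { "base": "172.17.0.0/12", "size": 20 },
        { "base": "192.168.0.0/16", "size": 24 }
    ]
}
EOF
systemctl restart docker
# New networks should now be allocated from the 172.x range
docker network create probe
docker network inspect probe --format '{{(index .IPAM.Config 0).Subnet}}'
docker network rm probe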
-
Hi,
First of all, thanks for this cool project, it is just amazing!
Regarding the subject, the short version:
It would be nice to be able to change the Docker network from 10.0.0.0/24 to something else, either from the UI or at install time.
Longer version:
In my hybrid cloud network (Hetzner VPS + dedicated servers with Proxmox), the 10.0.0.0/16 network is already in use.
When I installed a Coolify instance on a Hetzner VPS, Docker simply acquired the 10.0.0.0/24 and 10.0.2.0/24 networks, which blocks access to other services on the 10.0.0.0/16 network I already have.
The reason this matters is that we need to access other servers/services on the internal network directly.
When I checked the situation, I found that the installation script has a hardcoded value and overwrites my modifications, which is a little bit rude, but obviously effective.
Here is the relevant part of the install script:
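(A sketch rather than the literal script lines: this reconstructs what the script effectively writes to /etc/docker/daemon.json, with the 10.0.0.0 base and /24 pool size inferred from the networks Docker acquired above.)

# The installer overwrites /etc/docker/daemon.json with a hardcoded pool,
# roughly equivalent to:
cat > /etc/docker/daemon.json <<'EOF'
{
    "default-address-pools": [
        { "base": "10.0.0.0/8", "size": 24 }
    ]
}
EOF
systemctl restart docker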
So, what I would like to see here is {"base":"10.137.0.0/8","size":24} or something else; for example, if the network setting differs from 10.0.0.0, the installation should keep that value instead of just overwriting it without a warning (sure, I did not expect that daemon.json would be overwritten :) ).
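A minimal sketch of the kind of guard the installer could add (hypothetical; assumes jq is available and the config lives at /etc/docker/daemon.json):

# Hypothetical guard: only write the default pool when daemon.json does not
# already define one, so an admin's custom base survives reinstalls/upgrades.
if ! jq -e '."default-address-pools"' /etc/docker/daemon.json >/dev/null 2>&1; then
    echo '{ "default-address-pools": [ { "base": "10.0.0.0/8", "size": 24 } ] }' > /etc/docker/daemon.json
    systemctl restart docker
fi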
Notes
I tried to reconfigure the network and, according to the forums, running the installation script again (after removing the messed-up containers) can restore (upgrade) them, which is pretty cool. It did just that and reconfigured its network, but it overwrote the daemon config and I am still on a dead subnet :)
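For anyone in the same spot, a quick sketch for checking whether a reinstall clobbered the config again (the network name coolify is an assumption about what the installer creates):

# Did the pool base get overwritten?
grep -A 4 'default-address-pools' /etc/docker/daemon.json
# Which subnet did the main network actually get? (assumes a network named "coolify")
docker network inspect coolify --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}'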