In theory it's doable, but in practice CapRover isn't designed for this. You'd end up doing so much manual work that I'd argue CapRover would slow the process down more than it helps.
@githubsaturn Thanks for the response. I have tried an approach using an external load balancer and it was very easy to set up with CapRover, so I thought I'd share.

**Basic assumptions**

For this test I used DigitalOcean with their managed Load Balancer service, but this approach should work with Cloudflare, AWS CloudFront, or any similar service.

**Setup**

1. Set up a captain node and two child nodes and connect them in a cluster.
2. Set up an application on CapRover.
3. Configure a port mapping (in this case 8888). This makes port 8888 on each node's IP route to the application. I am not an expert on Swarm, but it seems to use the routing mesh: when you connect to one node, you actually get a response from a random node in the swarm.
4. Create a Load Balancer in DigitalOcean and add the captain and worker nodes to it on port 8888.
5. Verify that the nodes are healthy.
6. Configure SSL termination on the Load Balancer.

I have tried disconnecting both worker nodes and the manager node, and it works just fine: broken nodes are taken out of the load balancer when their health checks start to fail, and they rejoin when they come back online.
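The health-check behaviour above (nodes dropping out of rotation and rejoining) can be sketched as a probe like the one the load balancer runs against each node. This is only an illustration: the node IPs, the app port 8888, and the "healthy means HTTP status below 500" rule are assumptions, not anything CapRover or DigitalOcean prescribes.

```python
# Sketch of a load-balancer-style health check against each swarm node.
# Node IPs are placeholders; port 8888 is the port mapping from the setup.
import http.client


def node_is_healthy(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if the node answers an HTTP GET on the app port."""
    try:
        conn = http.client.HTTPConnection(host, port, timeout=timeout)
        conn.request("GET", "/")
        status = conn.getresponse().status
        conn.close()
        return status < 500  # assumed "healthy" rule for this sketch
    except (OSError, http.client.HTTPException):
        # Connection refused or timed out: the LB drops this node from rotation.
        return False


if __name__ == "__main__":
    nodes = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]  # captain + workers (placeholders)
    for ip in nodes:
        state = "healthy" if node_is_healthy(ip, 8888, timeout=0.5) else "unhealthy"
        print(f"{ip}: {state}")
```

When a node comes back online the probe succeeds again, which is why the disconnected nodes in the test above rejoined the pool automatically.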
As mentioned in #1029, when you have configured a swarm cluster in CapRover, the captain node is a single point of failure.
Could one possible way of dealing with this be to add a third-party load balancer such as AWS ALB or the DigitalOcean Load Balancer? These managed services can distribute traffic among several servers running Docker/CapRover and can handle SSL termination.
But there would have to be some way for each node to expose a port with the running application. Could this perhaps be done already using Port Mapping? Do you see any pitfalls in this approach? To run this in production, it would be important to limit which IPs can reach the ports, so that only your load balancer can connect to them.
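The "only the load balancer can connect" restriction would normally live in a cloud firewall or an iptables rule rather than in application code, but the check itself is just a source-address match against an allowlist. A minimal sketch, where the allowed subnet is a made-up placeholder and not a real load-balancer range:

```python
# Sketch: accept traffic on the published port only from allowed source networks.
# The subnet below is a placeholder; a real setup would use the LB's actual range.
import ipaddress

ALLOWED_NETS = [ipaddress.ip_network("10.10.0.0/16")]  # placeholder LB subnet


def connection_allowed(peer_ip: str) -> bool:
    """True if the connecting peer falls inside an allowed network."""
    addr = ipaddress.ip_address(peer_ip)
    return any(addr in net for net in ALLOWED_NETS)
```

In practice you would enforce this before traffic reaches the node at all (cloud firewall, security group, or iptables), since a check inside the application still lets strangers open connections.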
Another solution (which would be even simpler) is if each node in the swarm could have a copy of the SSL certificate/nginx config and terminate connections itself. In that case no load balancer is needed: you would simply add all your node IPs to the DNS record for your domain, and each visiting browser would act as a rudimentary load balancer, trying multiple IPs to reach your service.
Could either of these two options be viable for CapRover?
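The "browser as a rudimentary load balancer" idea in the second option amounts to client-side failover across the A records of the domain. A minimal sketch of that behaviour, with placeholder IPs and port (real browsers implement something similar internally):

```python
# Sketch of client-side failover over multiple DNS A records:
# try each node IP in order and use the first one that accepts a connection.
import socket


def first_reachable(ips, port, timeout=1.0):
    """Return the first IP (in DNS order) that accepts a TCP connection, else None."""
    for ip in ips:
        try:
            with socket.create_connection((ip, port), timeout=timeout):
                return ip  # in this scheme, the node itself would terminate SSL
        except OSError:
            continue  # node offline -> fall through to the next A record
    return None


# A client would obtain the candidate IPs from DNS, e.g.:
# a_records = [r[4][0] for r in socket.getaddrinfo("example.com", 443,
#                                                  proto=socket.IPPROTO_TCP)]
```

One caveat with relying on this: failover only happens per-client and per-attempt, so visitors whose client picks a dead IP first see a delay before the retry, and DNS records cannot be pulled out of rotation as quickly as a load balancer's health checks can.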