
Support for AWS multi-AZ #681

Closed
UltraInstinct14 opened this issue May 23, 2024 · 3 comments
Labels
enhancement New feature or request

Comments

@UltraInstinct14
Contributor

Is your feature request related to a problem? Please describe.

If loxilb runs as two instances, each in a different VPC or AZ, the same VIP currently can't be maintained for communication across them.

Describe the solution you'd like
loxilb instances should be able to run in different VPCs/AZ with the same VIP CIDR

Describe alternatives you've considered
N/A

Additional context

There is a high-level AWS design pattern for how this could be achieved.

@UltraInstinct14 UltraInstinct14 added the enhancement New feature or request label May 23, 2024
@TrekkieCoder
Collaborator

The Elastic IP needs to be reassociated to the active EC2 instance. For fullNAT mode to work, a private CIDR needs to be associated with the loxilb instances. This private CIDR also needs to migrate to the active VPC.
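For illustration, the reassociation step described above maps to a single AWS API call. Here is a minimal sketch using the AWS CLI; the IDs in angle brackets are placeholders, and loxilb drives the equivalent call programmatically through the AWS SDK:

# Move the Elastic IP to the newly active loxilb EC2 instance.
# IDs below are placeholders; --allow-reassociation permits the move
# even while the EIP is still attached to the failed instance.
aws ec2 associate-address \
  --allocation-id <eip-allocation-id> \
  --instance-id <new-active-instance-id> \
  --allow-reassociation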

TrekkieCoder added commits that referenced this issue May 27, 2024
@TrekkieCoder TrekkieCoder changed the title Support for AWS multi-VPC/multi-AZ Support for AWS multi-AZ Jun 4, 2024
@backguynn
Collaborator

backguynn commented Jun 4, 2024

The overall pattern is as follows -

  • Assign a private subnet to the master loxilb instance, and associate an EIP with it so it is reachable from outside.
  • When failover occurs, recreate the private subnet and attach it to the new master instance (see the CLI sketch below).

(Architecture diagram: loxilb-k8s-arch-Multi-AZ-HA)
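For clarity, the failover sequence above can be expressed as AWS CLI calls. This is only a hedged sketch: IDs in angle brackets are placeholders, and loxilb automates the equivalent calls through the AWS SDK when started with --cloud=aws:

# 1. Recreate the private subnet in the new master's AZ.
aws ec2 delete-subnet --subnet-id <old-private-subnet-id>
aws ec2 create-subnet \
  --vpc-id <vpc-id> \
  --cidr-block 192.168.248.0/24 \
  --availability-zone <new-master-az>

# 2. Create an ENI in the new subnet holding the private IP, and
#    attach it to the new master instance.
aws ec2 create-network-interface \
  --subnet-id <new-private-subnet-id> \
  --private-ip-address 192.168.248.254
aws ec2 attach-network-interface \
  --network-interface-id <new-eni-id> \
  --instance-id <new-master-instance-id> \
  --device-index 1

# 3. Re-associate the Elastic IP with the migrated private IP.
aws ec2 associate-address \
  --allocation-id <eip-allocation-id> \
  --network-interface-id <new-eni-id> \
  --private-ip-address 192.168.248.254 \
  --allow-reassociation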

The following is an example HA configuration; change the instances' IPs and subnet settings as per your environment.

VPC CIDR: 192.168.0.0/16
loxilb instance1: 192.168.218.87
loxilb instance2: 192.168.228.79
Elastic IP: 15.168.149.225
private subnet: 192.168.248.0/24
private IP associated with EIP: 192.168.248.254

Setting up kube-loxilb

  • Download manifest -
wget https://raw.githubusercontent.com/loxilb-io/kube-loxilb/main/manifest/ext-cluster/kube-loxilb.yaml
  • Change the params as follows -
spec:
  containers:
  - name: kube-loxilb
    image: ghcr.io/loxilb-io/kube-loxilb:aws-support
    imagePullPolicy: Always
    command:
    - /bin/kube-loxilb
    args:
    - --loxiURL=http://192.168.228.79:11111,http://192.168.218.87:11111
    - --externalCIDR=15.168.149.225/32
    - --privateCIDR=192.168.248.254/32
    - --setRoles=0.0.0.0
    - --setLBMode=2

In loxiURL, specify the IPs of the loxilb 1 & 2 instances.
In externalCIDR, specify the Elastic IP to use for external access (the netmask must currently be 32).
In privateCIDR, specify the private IP to be associated with the Elastic IP (the netmask must currently be 32).
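Once the manifest is applied (kubectl apply -f kube-loxilb.yaml), a Service of type LoadBalancer should be assigned the Elastic IP as its external address. A minimal test Service might look like the following; the name and selector are illustrative, while loadBalancerClass loxilb.io/loxilb is the class kube-loxilb watches:

apiVersion: v1
kind: Service
metadata:
  name: nginx-lb                        # illustrative name
spec:
  type: LoadBalancer
  loadBalancerClass: loxilb.io/loxilb   # handled by kube-loxilb
  selector:
    app: nginx                          # illustrative selector
  ports:
  - port: 80
    targetPort: 80

kubectl get svc nginx-lb should then report 15.168.149.225 under EXTERNAL-IP once the rule is programmed.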

Setting up loxilb instances

  • loxilb1:
sudo docker run -u root --cap-add SYS_ADMIN \
  --restart unless-stopped \
  --net=host \
  --privileged \
  -dit \
  -v /dev/log:/dev/log \
  -e AWS_REGION=ap-northeast-3 \
  --name loxilb \
  ghcr.io/loxilb-io/loxilb:aws-support --cloud=aws --cloudcidrblock=192.168.248.0/24 --cluster=192.168.228.79 --self=0

In the --cloudcidrblock option, specify the CIDR of your private subnet.
In the --cluster option, specify the IP address of the peer instance (loxilb2's IP for loxilb1, and vice versa).
In the --self option, set 0 for loxilb1 and 1 for loxilb2.

  • loxilb2:
sudo docker run -u root --cap-add SYS_ADMIN \
  --restart unless-stopped \
  --net=host \
  --privileged \
  -dit \
  -v /dev/log:/dev/log \
  -e AWS_REGION=ap-northeast-3 \
  --name loxilb \
  ghcr.io/loxilb-io/loxilb:aws-support --cloud=aws --cloudcidrblock=192.168.248.0/24 --cluster=192.168.218.87 --self=1
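Once both containers are up, a quick sanity check can be run from either instance (assuming the container is named loxilb as above; loxicmd ships inside the loxilb image):

# Confirm the container started cleanly and list the programmed LB rules.
sudo docker logs loxilb | tail
sudo docker exec -it loxilb loxicmd get lb -o wide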

@TrekkieCoder
Collaborator

Multi-VPC support is yet to be validated, hence this is currently limited to multi-AZ within the same VPC !!

TrekkieCoder added commits that referenced this issue Jun 12, 2024