Installing a Rocks 6.2 cluster with the Open vSwitch Roll
For a Rocks cluster configuration, the frontend node must have public and private interfaces, and each vm-container node must have a private interface. Additional interfaces can be configured for additional networks. We recommend having a separate interface on each node for the OpenFlow data plane. This additional interface can be connected to an OpenFlow switch, and the connection can be bridged to the VMs through Open vSwitch.
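Once the cluster is up, you can identify a spare interface for the data plane by listing the network devices on each node; for example, from the frontend:
ifconfig -a
rocks run host vm-container "ifconfig -a"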
Install your Rocks 6.2 cluster. Currently, the Open vSwitch Roll is only available for Rocks 6.2. For Rocks 6.2 installation instructions, see the Rocks documentation: http://central6.rocksclusters.org/roll-documentation/base/6.2/
We recommend installing the KVM Roll so the cluster can host virtual machines that will be connected to ENT.
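To see which rolls are already installed and enabled on the frontend (the kvm roll should appear here if you installed it):
rocks list roll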
Log in to the physical frontend of the Rocks 6.2 cluster as root and run the following commands to build the Open vSwitch Roll.
mkdir /export/home/repositories
cd /export/home/repositories
git clone https://github.com/rocksclusters/openvswitch.git
cd openvswitch
./bootstrap.sh
make roll
These commands create the Open vSwitch Roll ISO, openvswitch-2.4.1-0.x86_64.disk1.iso.
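Before proceeding, you can confirm that the ISO was built in the current directory:
ls -lh openvswitch-*.disk1.iso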
Install the Open vSwitch Roll with the following commands on the frontend:
rocks add roll openvswitch-2.4.1-0.x86_64.disk1.iso
rocks enable roll openvswitch
(cd /export/rocks/install; rocks create distro)
It will take a few minutes to create the Rocks repository with the roll added. After the command finishes, you can run a test command to verify what will be done during roll installation:
rocks run roll openvswitch > /tmp/add-openvswitch-roll
Examine the resulting file /tmp/add-openvswitch-roll; it contains the commands that will be executed when adding the roll. If the file contains no errors or warnings, run the following command:
rocks run roll openvswitch | sh
Check if the RPMs are installed properly.
# rpm -qa | grep openvswitch
The output should contain the following packages:
openvswitch-command-plugins-1-4.x86_64
roll-openvswitch-usersguide-2.4.1-0.x86_64
kmod-openvswitch-2.4.1-1.el6.x86_64
openvswitch-2.4.1-1.x86_64
If the RPMs are not installed, run the following:
cd /export/home/repositories/openvswitch/RPMS/x86_64
rpm -ivh *.rpm
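As an extra sanity check, you can verify that the kernel module from kmod-openvswitch is visible to the kernel and that the Open vSwitch service is running (the service name openvswitch is an assumption here and may differ on your installation):
modinfo openvswitch
lsmod | grep openvswitch
service openvswitch status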
If your cluster has vm-container nodes, run the following command on the frontend:
rocks run host vm-container "yum clean all; yum install openvswitch kmod-openvswitch openvswitch-command-plugins"
or, alternatively, execute these commands:
rocks run roll openvswitch > /share/apps/add-openswitch
rocks run host vm-container "bash /share/apps/add-openswitch"
To support Open vSwitch Roll, Base Roll needs to be updated.
cd /export/home/repositories
git clone https://github.com/rocksclusters/base.git
cd base
git checkout rocks-6.2-ovs
cd src/rocks-pylib
make rpm
yum update ../../RPMS/noarch/rocks-pylib-6.2-2.noarch.rpm
To add this RPM on vm-containers:
cp ../../RPMS/noarch/rocks-pylib-6.2-2.noarch.rpm /export/rocks/install/contrib/6.2/x86_64/RPMS/
(cd /export/rocks/install; rocks create distro)
rocks run host vm-container "yum clean all; yum update rocks-pylib"
Check which version of rocks-command-kvm is installed (a quick check is shown below). If it is older than 6.2-3, you will need to update this RPM; please download the rocks-command-kvm RPM.
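One way to check the installed version on the frontend:
rpm -q rocks-command-kvm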
The installation commands for the frontend are:
cp rocks-command-kvm-6.2-3.x86_64.rpm /export/rocks/install/contrib/6.2/x86_64/RPMS/
(cd /export/rocks/install/; rocks create distro)
yum clean all
yum list rocks-command-kvm
yum update rocks-command-kvm
To install on vm-containers:
rocks run host vm-container "yum clean all; yum update rocks-command-kvm"
The above commands add the RPMs, and the vm-containers do not need to be reinstalled. However, if no VMs are running, you can instead reinstall the vm-container nodes. Repeat the following commands for each vm-container node.
rocks set host boot vm-container action=install
rocks run host vm-container reboot
Run the following command on the frontend. The address specified in the command will not actually be used, but you should assign a unique IP range.
rocks add network openflow subnet=192.168.0.0 netmask=255.255.255.0
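You can confirm that the new network was registered alongside the existing private and public networks:
rocks list network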
Add a bridge device on the frontend node. In the commands below, YOUR-HOST needs to be replaced with your cluster name, and tcp:xxx.xxx.xxx.xxx:xxxx needs to be replaced with your OpenFlow controller address and port (the controller address and port must come from the ENT operators). To find your cluster name, execute:
hostname -s
rocks add host interface YOUR-HOST br0 subnet=openflow module=ovs-bridge
rocks set host interface options YOUR-HOST br0 options='set-fail-mode $DEVICE secure -- set bridge $DEVICE protocol=OpenFlow10 -- set-controller $DEVICE tcp:xxx.xxx.xxx.xxx:xxxx'
Add bridge devices on the vm-container nodes (repeat for all vm-container nodes):
rocks add host interface vm-container-0-0 br0 subnet=openflow module=ovs-bridge
rocks set host interface options vm-container-0-0 br0 options='set-fail-mode $DEVICE secure -- set bridge $DEVICE protocol=OpenFlow10 -- set-controller $DEVICE tcp:xxx.xxx.xxx.xxx:xxxx'
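To check that the bridge interfaces were recorded in the Rocks database, list the interfaces of the frontend and of a vm-container (YOUR-HOST as above):
rocks list host interface YOUR-HOST
rocks list host interface vm-container-0-0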
If the physical network device you want to add to the host has not yet appeared in rocks list host interface, run the following commands. In the commands below, eth2 and eth1 need to be replaced with the network devices on your physical host that are available for a connection (not already used for the private or public network). In this example, eth2 is used on the frontend and eth1 is used on the vm-container nodes.
rocks add host interface YOUR-HOST eth2 subnet=openflow module=ovs-link
rocks set host interface options YOUR-HOST eth2 options=nobridge
rocks add host interface vm-container-0-0 eth1 subnet=openflow module=ovs-link
rocks set host interface options vm-container-0-0 eth1 options=nobridge
rocks add host interface vm-container-0-1 eth1 subnet=openflow module=ovs-link
rocks set host interface options vm-container-0-1 eth1 options=nobridge
...
(repeat for all vm-container nodes)
If the physical network devices have already appeared in rocks list host interface, run the following commands.
rocks set host interface module YOUR-HOST eth2 ovs-link
rocks set host interface subnet YOUR-HOST eth2 openflow
rocks set host interface options YOUR-HOST eth2 options=nobridge
rocks set host interface module vm-container-0-0 eth1 ovs-link
rocks set host interface subnet vm-container-0-0 eth1 openflow
rocks set host interface options vm-container-0-0 eth1 options=nobridge
rocks set host interface module vm-container-0-1 eth1 ovs-link
rocks set host interface subnet vm-container-0-1 eth1 openflow
rocks set host interface options vm-container-0-1 eth1 options=nobridge
...
(repeat for all vm-container nodes)
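If you have many vm-container nodes, the per-node commands above can be wrapped in a small shell loop on the frontend. This is only a sketch; the node names in the list are examples and must be replaced with your own:
for node in vm-container-0-0 vm-container-0-1; do
  rocks set host interface module $node eth1 ovs-link
  rocks set host interface subnet $node eth1 openflow
  rocks set host interface options $node eth1 options=nobridge
done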
If you also find peth2 and peth1 interfaces generated for KVM in the output of the ifconfig command, you need to remove them.
DO NOT remove the pethX interfaces actually used by KVM. Remove only the pethX interface corresponding to the device you are adding for Open vSwitch (in the above example, peth2 on the frontend and peth1 on the vm-containers).
On the frontend (removing peth2):
rm /etc/sysconfig/network-scripts/ifcfg-peth2
vi /etc/udev/rules.d/70-persistent-net.rules
(Edit 70-persistent-net.rules and remove lines for peth2)
reboot
On the vm-containers (removing peth1):
rm /etc/sysconfig/network-scripts/ifcfg-peth1
vi /etc/udev/rules.d/70-persistent-net.rules
(Edit 70-persistent-net.rules and remove lines for peth1)
reboot
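After the reboot, you can confirm that the pethX interface you removed no longer appears while the interfaces KVM still uses remain:
ifconfig -a | grep peth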
Confirm the network configuration.
rocks report host interface YOUR-HOST
Then, run the sync command.
rocks sync host network YOUR-HOST
Repeat for the vm-containers.
rocks report host interface vm-container-0-0
rocks sync host network vm-container-0-0
rocks report host interface vm-container-0-1
rocks sync host network vm-container-0-1
...
If you would like your virtual cluster to join the OpenFlow network, run these commands. They add another network interface to the VMs. We recommend leaving the default public and private networks as they are. Remember that you need to shut down the virtual cluster nodes before running the following commands. Verify that the virtual hosts are down by running this command (the output should list "nostate" for STATUS):
rocks list host vm status=1
Add interfaces to all the virtual cluster nodes. For a virtual frontend (named frontend-0-0-0):
rocks add host interface frontend-0-0-0 ovs subnet=openflow mac=`rocks report vm nextmac`
rocks sync config frontend-0-0-0
For virtual compute nodes (named hosted-vm-0-0-0 and hosted-vm-0-1-0):
rocks add host interface hosted-vm-0-0-0 ovs subnet=openflow mac=`rocks report vm nextmac`
rocks sync config hosted-vm-0-0-0
rocks add host interface hosted-vm-0-1-0 ovs subnet=openflow mac=`rocks report vm nextmac`
rocks sync config hosted-vm-0-1-0
...
repeat for all virtual compute nodes.
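With many virtual compute nodes, the two commands above can also be scripted. This is only a sketch; the hosted-vm names are examples, and rocks report vm nextmac is evaluated once per node so each interface gets a fresh MAC address:
for vm in hosted-vm-0-0-0 hosted-vm-0-1-0; do
  rocks add host interface $vm ovs subnet=openflow mac=`rocks report vm nextmac`
  rocks sync config $vm
done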
Start up the virtual frontend (substitute your frontend name):
rocks start host vm frontend-0-0-0
When the virtual cluster is up and running, ssh to the virtual frontend and execute the following commands. Substitute the network name, subnet, netmask, and host IP with the values for your network. The network name can be your choice, but the subnet, netmask, and host IP must come from the ENT operators. The interface name is available in /etc/udev/rules.d/70-persistent-net.rules.
rocks add network vopenflow subnet=192.168.0.0 netmask=255.255.255.0
rocks add host interface localhost eth2 subnet=vopenflow ip=192.168.0.1
rocks sync config
rocks sync host network localhost
Now test that the interface can be pinged:
ifconfig -a
ping 192.168.0.1
This is done for testing only:
ovs-vsctl add-port br0 tap0 -- set Interface tap0 type=internal
ifconfig tap0 192.168.0.2 netmask 255.255.255.0 up
Verify that the tap0 device is up:
ifconfig tap0
Verify that you can ping the interface on the VM:
ping 192.168.0.1
When done with testing, you can delete the tap device:
ovs-vsctl del-port br0 tap0
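In addition to the ping test, you can inspect the Open vSwitch side of the setup on the host that carries the bridge: ovs-vsctl show lists the bridge, its ports, and the configured controller, and ovs-ofctl queries the OpenFlow state of br0:
ovs-vsctl show
ovs-vsctl get-controller br0
ovs-ofctl show br0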