====== [HOWTO] Linux KVM cluster ======
====== Description ======
Instructions on how to deploy a high availability KVM cluster based on CentOS 6.x.


====== Instructions ======
===== RHEL cluster =====
  * Deploy CentOS (minimal).
  * Set up basic networking.
  * Disable SELinux.
  * Install the EPEL repository (''epel-release'' is available from the CentOS Extras repo):
<code bash>
yum -y install epel-release
</code>
  * Install basic pkgs (for me):
<code bash>
yum -y install vim tmux lsof strace
</code>
  * Update:
<code bash>
yum update
</code>
  * Set up multipathd and your shared storage 8-)
  * Install the clusterware suite:
<code bash>
yum groupinstall "High Availability"
</code>
  * On the admin node, also install the management tools:
<code bash>
yum groupinstall "High Availability Management"
</code>
  * Change the password for the "ricci" user:
<code bash>
passwd ricci
</code>
  * Configure services:
<code bash>
chkconfig ricci on
chkconfig luci on
chkconfig cman on
chkconfig modclusterd on
</code>
  * Start services:
<code bash>
service ricci start
service cman start
</code>
  * Start luci on the admin node:
<code bash>
service luci start
</code>
If luci refuses to start, read the [[high_availability_virtualization_cluster#luci|Troubleshoot]] section below.
  * Disable NetworkManager:
<code bash>
chkconfig NetworkManager off && service NetworkManager stop
</code>
  * Set up at least one bridge. For example, with 4 interfaces: eth0+eth1=bond0 (host network) and eth2+eth3=br0 (VM bridge).
  * ifcfg-eth[01]
<code bash>
DEVICE=eth[01]
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=none
MASTER=bond0
SLAVE=yes
</code>
  * ifcfg-eth[23]
<code bash>
DEVICE=eth[23]
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=none
BRIDGE=br0
</code>
  * ifcfg-br0
<code bash>
DEVICE=br0
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=none
USERCTL=no
</code>
  * ifcfg-bond0
<code bash>
DEVICE=bond0
ONBOOT=yes
BOOTPROTO=static
USERCTL=no
NETWORK=10.54.0.0
IPADDR=10.54.0.20
NETMASK=255.255.255.0
BROADCAST=10.54.0.255
GATEWAY=10.54.0.1
</code>
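Note that ''BROADCAST'' must be consistent with ''NETWORK'' and ''NETMASK'' (for ''10.54.0.0/24'' the broadcast is ''10.54.0.255''). If you want to double-check a value, here is a small sketch; the ''broadcast'' function is a hypothetical helper, not a standard tool:

```shell
# Hypothetical helper: compute an IPv4 broadcast address from IP + netmask.
broadcast() {
  local ip=$1 mask=$2
  IFS=. read -r i1 i2 i3 i4 <<< "$ip"
  IFS=. read -r m1 m2 m3 m4 <<< "$mask"
  # broadcast octet = ip octet OR inverted mask octet
  echo "$((i1 | 255 - m1)).$((i2 | 255 - m2)).$((i3 | 255 - m3)).$((i4 | 255 - m4))"
}

broadcast 10.54.0.20 255.255.255.0   # -> 10.54.0.255
```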
  * Enable forwarding across bridges in iptables:
<code bash>
iptables -A FORWARD -m physdev --physdev-is-bridged -j ACCEPT
</code>
Or disable iptables (if you have a good firewall):
<code bash>
service iptables stop
chkconfig iptables off
</code>
  * Enable forwarding in the kernel:
<code bash>
# in /etc/sysctl.conf:
net.ipv4.ip_forward = 1
# then reload:
sysctl -p
</code>
  * Access the luci (Conga) web UI; the URL is displayed after starting luci, something like ''https://<admin-node>:8084''.
  * Define a fence method for each node.
  * I assume you've exported a volume from your storage system to all the servers and set up multipathd (or whatever) so you can see the volume under ''/dev/mapper/''.
  * Create the partition on the disk (''/dev/mapper/<volume>'' is a placeholder for your multipath device):
<code bash>
parted /dev/mapper/<volume>
(parted) mklabel gpt
(parted) mkpart primary ext2 0 9999999G
(parted) set 1 lvm on
(parted) quit
</code>
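The same partitioning can also be done non-interactively; a sketch, again with ''/dev/mapper/<volume>'' as a placeholder for your device:
<code bash>
# Non-interactive equivalent of the parted session above.
# /dev/mapper/<volume> is a placeholder for your multipath device.
parted -s /dev/mapper/<volume> mklabel gpt
parted -s /dev/mapper/<volume> mkpart primary 0% 100%
parted -s /dev/mapper/<volume> set 1 lvm on
</code>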
  * Create the clustered VG on the new partition (the partition path is again a placeholder):
<code bash>
vgcreate --clustered y SHAREDVG /dev/mapper/<volume>p1
</code>
  * On the rest of the nodes (I didn't need it, but it's safer to run it):
<code bash>
partprobe; vgscan
</code>
  * Create the LVM volume:
<code bash>
lvcreate -L 9999999G -n SHARED_LV SHAREDVG
</code>
  * And create the GFS2 filesystem. ''-t'' takes the cluster name and a filesystem name (''CLUSTERNAME:SHARED_LV'' here is an example), and ''-j'' sets the number of journals, at least one per node:
<code bash>
mkfs.gfs2 -p lock_dlm -t CLUSTERNAME:SHARED_LV -j <number-of-nodes> /dev/SHAREDVG/SHARED_LV
</code>
  * Add the new filesystem to fstab (**MANDATORY**) on all the nodes; the mount point ''/vmstore'' is an example:
<code bash>
# GFS
/dev/SHAREDVG/SHARED_LV  /vmstore  gfs2  defaults  0 0
# GFS
</code>
  * Mount it.
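The fstab entry is easy to get wrong, and boot-time fsck must not touch a clustered GFS2 volume, so the last two fields should be ''0 0''. A minimal sketch to sanity-check a line (''check_gfs2_line'' and the paths are examples, not part of the cluster tooling):

```shell
# Hypothetical check: an fstab line for GFS2 should have 6 fields,
# filesystem type gfs2, and dump/pass both set to 0.
check_gfs2_line() {
  set -- $1
  [ "$#" -eq 6 ] && [ "$3" = "gfs2" ] && [ "$5" = "0" ] && [ "$6" = "0" ]
}

check_gfs2_line "/dev/SHAREDVG/SHARED_LV /vmstore gfs2 defaults 0 0" && echo OK   # prints OK
```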
===== KVM =====
  * Install the virtualization package groups:
<code bash>
yum groupinstall "Virtualization" "Virtualization Client" "Virtualization Platform" "Virtualization Tools"
</code>
  * More deps for virt-manager:
<code bash>
yum install dejavu-lgc-sans-fonts
</code>
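Once libvirt is running and the GFS2 volume is mounted, you can create a first guest; a sketch, where the name, sizes, mount point ''/vmstore'' and ISO path are all examples:
<code bash>
# Example only: adjust name, memory, disk path (on the shared GFS2 mount) and ISO.
virt-install \
  --name testvm \
  --ram 1024 \
  --vcpus 1 \
  --disk path=/vmstore/testvm.img,size=10 \
  --network bridge=br0 \
  --cdrom /path/to/install.iso
</code>
Putting the disk image on the shared mount is what lets any node in the cluster start the guest.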
====== Troubleshoot ======
===== Luci =====
If you get an error like this:
<code>
Unable to create the luci base configuration file (`/...`)
</code>
You can try to:
<code bash>
rpm -e --nodeps python-webob-0.9.6.1-3.el6.noarch
easy_install WebOb==1.0.8
</code>
**This should be enough**, but if you're still having problems running luci, follow these instructions:
Edit this file:
<code bash>
vi /...
</code>
And comment out or remove this line:
<code bash>
from webob import UnicodeMultiDict
</code>

That's it.