====== [HOWTO] Linux KVM cluster ======

====== Description ======
Instructions on how to deploy a high availability KVM cluster based on CentOS 6.x.
====== Instructions ======
===== RHEL cluster =====
  * Deploy CentOS (minimal)
  * Set up basic networking.
  * Disable SELinux.
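One way to do it (a sketch; assumes the stock ''/etc/selinux/config'' layout):
<code bash>
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
setenforce 0   # immediate; the config change applies from the next boot
</code>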
  * Install EPEL meta-pkg:
<code bash>yum groupinstall "High Availability Management"
</code>
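The management group provides luci; on the cluster nodes themselves you will also want the base group (an assumption, in case it wasn't pulled in earlier), which carries cman, ricci and friends:
<code bash>
yum groupinstall "High Availability"
</code>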
  * Change the password for the "ricci" user (luci authenticates against it when adding nodes):
<code bash>passwd ricci
</code>
  * Configure services:
<code bash>
chkconfig cman on
chkconfig modclusterd on
</code>
  * Start services:
<code bash>
service ricci start
service cman start
</code>
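To sanity-check that the cluster stack came up (a sketch; ''clustat'' is available once rgmanager is installed):
<code bash>
cman_tool status   # quorum and membership info
clustat            # node status as seen by the cluster
</code>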
  * Start luci on the admin node:
<code bash>service luci start
</code>
If luci refuses to start, read the [[high_availability_virtualization_cluster#troubleshooting|troubleshooting]] section.
  * Disable NetworkManager:
<code bash>
chkconfig NetworkManager off && service NetworkManager stop
</code>
  * Set up at least one bridge. For example, with 4 interfaces: eth0+eth1 bonded as bond0 (carrying the host IP) and eth2+eth3 attached to bridge br0 (for the VMs). A verification sketch follows the last config file below.
    * ifcfg-eth[01]
<code bash>
DEVICE=eth[01]
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=none
MASTER=bond0
SLAVE=yes
</code>
    * ifcfg-eth[23]
<code bash>
DEVICE=eth[23]
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=none
BRIDGE=br0
</code>
    * ifcfg-br0
<code bash>
DEVICE=br0
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=none
USERCTL=no
</code>
    * ifcfg-bond0
<code bash>
DEVICE=bond0
ONBOOT=yes
BOOTPROTO=static
USERCTL=no
NETWORK=10.54.0.0
IPADDR=10.54.0.20
NETMASK=255.255.255.0
BROADCAST=10.54.0.255
GATEWAY=10.54.0.1
</code>
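After restarting the network you can verify the bond and the bridge (''brctl'' comes from the bridge-utils package):
<code bash>
service network restart
cat /proc/net/bonding/bond0   # bonding mode and active slaves
brctl show                    # br0 should list eth2 and eth3 as ports
</code>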
  * Enable forwarding on bridges in iptables:
<code bash>
iptables -A FORWARD -m physdev --physdev-is-bridged -j ACCEPT
</code>
Or disable iptables (if you have a good firewall):
<code bash>
service iptables stop
chkconfig iptables off
</code>
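If you keep iptables, note that the FORWARD rule above only lives in the running ruleset; persist it with the stock init script:
<code bash>
service iptables save   # writes the current rules to /etc/sysconfig/iptables
</code>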
  * Enable forwarding in the kernel:
<code bash>
echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
sysctl -p /etc/sysctl.conf
</code>
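To confirm it took effect:
<code bash>
sysctl net.ipv4.ip_forward   # should print "net.ipv4.ip_forward = 1"
</code>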
  * Access the luci web UI (Conga): the URL is displayed after starting luci, something like ''https://<admin_node>:8084/''.
  * Define a fence method for each node.
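Once fencing is defined you can test it from another node (careful: this really power-cycles the target; ''node01'' is a hypothetical node name):
<code bash>
fence_node node01
</code>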
  * I assume you've exported some volume from your storage system to all the servers and set up multipathd (or whatever), so you can see the volume under ''/dev/mapper/…''.
  * Create the partition on the disk (''<multipath_device>'' stands for your volume under ''/dev/mapper''):
<code bash>
parted /dev/mapper/<multipath_device>
(parted) mklabel gpt
(parted) mkpart primary ext2 0 9999999G
(parted) set 1 lvm on
(parted) quit
</code>
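Clustered volume groups need clvmd running on every node before the next step (an assumption about your setup; clvmd ships in the lvm2-cluster package):
<code bash>
yum install lvm2-cluster
lvmconf --enable-cluster                    # sets locking_type=3 in lvm.conf
chkconfig clvmd on && service clvmd start
</code>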
  * Create the clustered VG on the new partition:
<code bash>
vgcreate --clustered y SHAREDVG /dev/mapper/<multipath_device>p1
</code>
  * On the rest of the nodes (I didn't need it, but it's safer to run it):
<code bash>
partprobe; vgscan
</code>
  * Create the LVM volume:
<code bash>lvcreate -L 9999999G -n SHARED_LV SHAREDVG
</code>
  * And create the GFS2 filesystem (''-t'' takes ''clustername:fsname'', where the cluster name must match cluster.conf, and ''-j'' creates one journal per node):
<code bash>mkfs.gfs2 -p lock_dlm -t CLUSTERNAME:SHARED_LV -j <number_of_nodes> /dev/SHAREDVG/SHARED_LV
</code>
  * Add the new filesystem to fstab (**MANDATORY**) on all the nodes, e.g. with ''/mnt/SHARED'' as mount point:
<code bash>
# GFS
/dev/SHAREDVG/SHARED_LV  /mnt/SHARED  gfs2  defaults,noatime  0 0
# GFS
</code>
  * Mount it:
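With the fstab entry above in place, on each node:
<code bash>
mount /mnt/SHARED
</code>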

===== KVM =====
  * Install the dependencies (the CentOS 6 virtualization package groups):
<code bash>yum groupinstall "Virtualization" "Virtualization Client" "Virtualization Platform"
</code>
  * More deps for virt-manager:
<code bash>
yum install dejavu-lgc-sans-fonts
</code>
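Then make sure libvirtd starts at boot and KVM is usable (assuming the groups above pulled in libvirt and qemu-kvm):
<code bash>
chkconfig libvirtd on && service libvirtd start
virsh -c qemu:///system list --all   # should print an empty domain list
lsmod | grep kvm                     # kvm plus kvm_intel or kvm_amd
</code>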

====== Troubleshooting ======
If luci refuses to start because of the bundled python-webob package, replace it:
<code bash>
rpm -e --nodeps python-webob-0.9.6.1-3.el6.noarch
easy_install WebOb
</code>
**This should be enough**, but if you're still having problems running luci, follow these instructions:
Edit this file:
<code bash>
</code>
That's it.