====== [HOWTO] Linux KVM cluster ======

====== Description ======
Instructions on how to deploy a high availability KVM cluster based on CentOS 6.x
====== Instructions ======
===== RHEL cluster =====
  * Deploy CentOS (minimal)
  * Set up basic networking.
  * Disable SELinux
  * Install the EPEL meta-package:
<code bash>yum groupinstall "High Availability Management"
</code>
  * Change the password for the "ricci" user:
<code bash>passwd ricci
</code>
  * Configure services:
chkconfig cman on
chkconfig modclusterd on
</code>
  * Start services:
<code bash>
service ricci start
service cman start
</code>
  * Start luci on the admin node:
<code bash>service luci start
</code>
If luci refuses to start, read the [[high_availability_virtualization_cluster#|corresponding section]] of this page.
  * Access the luci (Conga) web UI (the URL is displayed after starting luci).
  * Disable NetworkManager:
<code bash>
chkconfig NetworkManager off && service NetworkManager stop
</code>
  * Set up at least one bridge. For example, with 4 interfaces: eth0+eth1 form bond0 (the cluster/management link) and eth2+eth3 are attached to the br0 bridge for the VMs:
  * ifcfg-eth[01]
<code bash>
DEVICE=eth[01]
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=none
MASTER=bond0
SLAVE=yes
</code>
  * ifcfg-eth[23]
<code bash>
DEVICE=eth[23]
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=none
BRIDGE=br0
</code>
  * ifcfg-br0
<code bash>
DEVICE=br0
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=none
USERCTL=no
</code>
  * ifcfg-bond0
<code bash>
DEVICE=bond0
ONBOOT=yes
BOOTPROTO=static
USERCTL=no
NETWORK=10.54.0.0
IPADDR=10.54.0.20
NETMASK=255.255.255.0
BROADCAST=10.54.0.255
GATEWAY=10.54.0.1
</code>
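Note that the BROADCAST address must belong to the same subnet as NETWORK: for ''10.54.0.0'' with a ''255.255.255.0'' netmask it is ''10.54.0.255''. A minimal sanity check (illustrative helper only, assumes a /24 netmask):

```shell
#!/bin/sh
# For a /24 netmask the broadcast address is simply the network
# address with the last octet replaced by 255.
NETWORK=10.54.0.0
BROADCAST="$(echo "$NETWORK" | cut -d. -f1-3).255"
echo "$BROADCAST"   # prints 10.54.0.255
```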
  * Enable forwarding of bridged traffic in iptables:
<code bash>
iptables -A FORWARD -m physdev --physdev-is-bridged -j ACCEPT
</code>
Or disable iptables (if you have a good firewall):
<code bash>
service iptables stop
chkconfig iptables off
</code>
  * Enable IP forwarding in the kernel (set it in ''/etc/sysctl.conf'' and reload):
<code bash>
net.ipv4.ip_forward = 1
sysctl -p /etc/sysctl.conf
</code>
  * Define a fence method for each node.
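For reference, a fence method in ''/etc/cluster/cluster.conf'' looks roughly like this (hypothetical node names, IPs and credentials; pick the fence agent matching your hardware):
<code xml>
<!-- Illustrative example only: IPMI fencing for one node -->
<clusternodes>
  <clusternode name="node1" nodeid="1">
    <fence>
      <method name="ipmi">
        <device name="ipmi_node1"/>
      </method>
    </fence>
  </clusternode>
</clusternodes>
<fencedevices>
  <fencedevice agent="fence_ipmilan" name="ipmi_node1" ipaddr="10.54.0.101" login="admin" passwd="changeme"/>
</fencedevices>
</code>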
  * This assumes you have exported a volume from your storage system to all the servers and set up multipathd (or similar), so the volume is visible under ''/dev/mapper''.
  * Create the partition on the disk (replace ''<your_device>'' with your multipath device):
<code bash>
parted /dev/mapper/<your_device>
(parted) mklabel gpt
(parted) mkpart primary ext2 0 9999999G
(parted) set 1 lvm on
(parted) quit
</code>
  * Create the clustered volume group on the new partition (clvmd must be running on all nodes):
<code bash>
vgcreate --clustered y SHAREDVG /dev/mapper/<your_device>p1
</code>
  * On the rest of the nodes (I didn't need it, but it's safer to run it):
<code bash>
partprobe; vgscan
</code>
  * Create the LVM volume:
<code bash>lvcreate -L 9999999G -n SHARED_LV SHAREDVG
</code>
  * And create the GFS2 filesystem. The lock table is ''CLUSTERNAME:FSNAME'' and ''-j'' sets the number of journals (one per node); for example, for a two-node cluster:
<code bash> mkfs.gfs2 -p lock_dlm -t CLUSTERNAME:SHARED_LV -j 2 /dev/SHAREDVG/SHARED_LV
</code>
  * Add the new filesystem to fstab (**MANDATORY**: the gfs2 init script only mounts/unmounts filesystems listed there) on all the nodes, using your own mount point:
<code bash>
# GFS
/dev/SHAREDVG/SHARED_LV    /<your_mountpoint>    gfs2    defaults,noatime    0 0
# GFS
</code>
  * Mount it on all the nodes (''mount -a'' will pick up the new fstab entry).
===== KVM =====
  * Install the dependencies:
<code bash> yum groupinstall "Virtualization" "Virtualization Client" "Virtualization Platform" "Virtualization Tools"
</code>
  * More deps for virt-manager:
<code bash>
yum install dejavu-lgc-sans-fonts
</code>
linux/high_availability_virtualization_cluster.txt · Last modified: 2022/02/11 11:36 by 127.0.0.1