====== [HOWTO] Linux KVM cluster ======

====== Description ======
Instructions on how to deploy a high availability KVM cluster based on CentOS 6.x
  
====== Instructions ======
===== RHEL cluster =====
  * Deploy CentOS (minimal)
  * Enjoy with basic networking.
  * Install the "High Availability Management" group:
<code bash>yum groupinstall "High Availability Management"
</code>
  * Change the password for the "ricci" user (all nodes):
<code bash>passwd ricci</code>
  * Configure services:
<code bash>
chkconfig ricci on
chkconfig cman on
chkconfig modclusterd on
</code>
  * Start services:
<code bash>
service ricci start
service cman start
</code>
  * Start luci on the admin node:
<code bash>service luci start</code>
If luci refuses to start, read the [[high_availability_virtualization_cluster#Troubleshoot|troubleshoot]] section.
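A quick first check before diving into the troubleshoot section is the init script status and the system log; a minimal sketch, assuming luci logs through syslog:
<code bash>
service luci status
# the last messages usually hint at why luci refused to start (certificates, SELinux, ports...)
tail -n 50 /var/log/messages
</code>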
  * Disable NetworkManager:
<code bash>
service NetworkManager stop
chkconfig NetworkManager off
</code>
  * Set up at least one bridge, for example with 4 interfaces: eth0+eth1 = bond0 (bonding), eth2+eth3 = br0 (bridge):
    * ifcfg-eth[01]
<code bash>
DEVICE=eth[01]
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=none
MASTER=bond0
SLAVE=yes
</code>
    * ifcfg-eth[23]
<code bash>
DEVICE=eth[23]
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=none
BRIDGE=br0
</code>
    * ifcfg-br0
<code bash>
DEVICE=br0
TYPE=Bridge
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=none
USERCTL=no
</code>
    * ifcfg-bond0
<code bash>
DEVICE=bond0
GATEWAY=10.54.0.1
</code>
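The bond itself still needs its bonding options and the host addressing, which depend on your switches and IP plan; a minimal sketch of what the rest of ''ifcfg-bond0'' typically contains, with hypothetical mode and addresses:
<code bash>
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=static
# hypothetical values: pick the bonding mode your switches support and your real addressing
BONDING_OPTS="mode=active-backup miimon=100"
IPADDR=10.54.0.10
NETMASK=255.255.255.0
</code>
After restarting the network you can verify that the bond and the bridge came up:
<code bash>
service network restart
cat /proc/net/bonding/bond0
brctl show br0
</code>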
  * Enable forwarding on bridges in iptables:
<code bash>
iptables -A FORWARD -m physdev --physdev-is-bridged -j ACCEPT
</code>
  * Or disable iptables (if you have a good firewall):
<code bash>
service iptables stop
chkconfig iptables off
</code>
  * Enable forwarding in the kernel:
<code bash>
# in /etc/sysctl.conf
net.ipv4.ip_forward = 1
sysctl -p /etc/sysctl.conf
</code>
  * Access the luci web UI (Conga) and create the cluster (the URL is displayed after starting luci, something like https://admin_node:8084/).
  * Define a fence method for each node; a command-line alternative is sketched below.
  * I assume you have exported a volume from your storage system to all the servers and set up multipathd (or whatever you use), so the volume is visible under ''/dev/mapper''.
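If you prefer the command line over the luci UI, fencing can also be defined with the ''ccs'' tool (it talks to ricci, so it will ask for the ricci password); a minimal sketch assuming an IPMI fence device with hypothetical node names, addresses and credentials:
<code bash>
# define an IPMI fence device and attach it to node1 (hypothetical values)
ccs -h node1.example.com --addfencedev ipmi_node1 agent=fence_ipmilan ipaddr=10.54.0.101 login=admin passwd=secret
ccs -h node1.example.com --addmethod IPMI node1.example.com
ccs -h node1.example.com --addfenceinst ipmi_node1 node1.example.com IPMI
# push the updated cluster.conf to all nodes and activate it
ccs -h node1.example.com --sync --activate
</code>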
  * Create the partition on the disk:
<code bash>
parted /dev/mapper/SHAREDDISK
(parted) mklabel gpt
(parted) mkpart primary ext2 0 9999999G
(parted) set 1 lvm on
(parted) quit
</code>
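The same layout can be created non-interactively, which is handy when you script the setup; a sketch assuming the same ''SHAREDDISK'' multipath alias and using percentages instead of an explicit size:
<code bash>
parted -s /dev/mapper/SHAREDDISK mklabel gpt
parted -s /dev/mapper/SHAREDDISK mkpart primary 0% 100%
parted -s /dev/mapper/SHAREDDISK set 1 lvm on
</code>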
  * Create the LVM physical volume and the clustered volume group on the new partition:
<code bash>pvcreate /dev/mapper/SHAREDDISKp1
vgcreate --clustered y SHAREDVG /dev/mapper/SHAREDDISKp1
</code>
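A clustered volume group needs cluster-wide LVM locking and the ''clvmd'' daemon running on every node; a hedged reminder, assuming the ''lvm2-cluster'' package is installed:
<code bash>
# switch LVM to cluster locking (locking_type = 3) and start clvmd on all nodes
lvmconf --enable-cluster
chkconfig clvmd on
service clvmd start
</code>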
  * On the rest of the nodes (I didn't need it, but it's safer to run it):
<code bash>
partprobe; vgscan
</code>
  * Create the LVM volume:
<code bash>lvcreate -L 9999999G -n SHARED_LV SHAREDVG
</code>
  * And create the GFS2 filesystem:
<code bash>mkfs.gfs2 -p lock_dlm -t CLUSTERNAME:SHAREDVOLUME -j 4 /dev/mapper/SHAREDVG-SHARED_LV
</code>
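''-t'' must use the cluster name you defined in luci, and ''-j 4'' creates four journals, one per node that will mount the filesystem. If the cluster grows later, journals can be added to the mounted filesystem; a sketch assuming the mount point used below:
<code bash>
# add one extra journal for a fifth node
gfs2_jadd -j 1 /mnt/shared_storage
</code>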
  * Add the new filesystem to fstab (**MANDATORY**) on all the nodes:
<code bash>
# GFS
/dev/mapper/SHAREDVG-SHARED_LV            /mnt/shared_storage         gfs2    noatime         0 0
# GFS
</code>
  * Mount it on all the nodes:
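A minimal sketch, assuming the ''/mnt/shared_storage'' mount point from the fstab entry above and the stock RHEL 6 init scripts:
<code bash>
mkdir -p /mnt/shared_storage
mount /mnt/shared_storage
# the gfs2 init script (re)mounts GFS2 entries from fstab at boot, after cman and clvmd
chkconfig gfs2 on
service gfs2 start
</code>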
  
  
  
===== KVM =====
  * Install the dependencies:
<code bash>yum groupinstall "Virtualization Platform" "Virtualization Tools" "Virtualization" "Virtualization Client"
</code>
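The groups install libvirt and qemu-kvm but do not necessarily start the daemon; a quick hedged check that KVM is usable on each node:
<code bash>
chkconfig libvirtd on
service libvirtd start
# verify the kvm modules are loaded and libvirt answers
lsmod | grep kvm
virsh nodeinfo
</code>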
 +  * More deps for virt-manager:
 <code bash> <code bash>
-service ricci start +yum install dejavu-lgc-sans-fonts
-service luci start +
-service cman start+
 </code> </code>
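To test the whole stack (bridge, shared GFS2 storage, KVM) end to end, a hedged ''virt-install'' sketch; the VM name, disk path and ISO are hypothetical, and putting the disk on the shared filesystem means any node can run the guest:
<code bash>
virt-install \
  --name testvm \
  --ram 2048 --vcpus 2 \
  --disk path=/mnt/shared_storage/testvm.qcow2,size=20 \
  --network bridge=br0 \
  --cdrom /mnt/shared_storage/CentOS-6-x86_64-minimal.iso \
  --graphics vnc
</code>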
  
  