====== [HOWTO] Linux KVM cluster ======

====== Description ======
Instructions on how to deploy a high availability KVM cluster based on CentOS 6.x.
  
====== Instructions ======
===== RHEL cluster =====
  * Deploy CentOS (minimal)
  * Set up basic networking.
  * Disable SELinux
  * Install the EPEL meta-pkg:
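On CentOS 6 the repository definition usually comes straight from the extras repo; a minimal sketch, assuming the standard ''epel-release'' package name:
<code bash>
# enable the EPEL repository
yum install epel-release
</code>
  * Install the High Availability management group: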
<code bash>yum groupinstall "High Availability Management"
</code>
  * Change the password for the "ricci" user (on all nodes):
<code bash>passwd ricci</code>
  * Configure the services to start at boot:
<code bash>
chkconfig cman on
chkconfig modclusterd on
</code>
  * Start services:
<code bash>
service ricci start
service cman start
</code>
  * Start luci on the admin node:
<code bash>service luci start</code>
If luci refuses to start, read the [[high_availability_virtualization_cluster#Troubleshoot|troubleshoot]] section.
  * Disable NetworkManager:
<code bash>
chkconfig NetworkManager off && service NetworkManager stop
</code>
  * Set up at least one bridge. For example, with 4 interfaces: eth0+eth1 = bond0, eth2+eth3 = br0 (ifcfg files below; apply them with the commands shown after the snippets):
    * ifcfg-eth[01]
<code bash>
DEVICE=eth[01]
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=none
MASTER=bond0
SLAVE=yes
</code>
    * ifcfg-eth[23]
<code bash>
DEVICE=eth[23]
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=none
BRIDGE=br0
</code>
    * ifcfg-br0
<code bash>
DEVICE=br0
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=none
USERCTL=no
</code>
    * ifcfg-bond0
<code bash>
DEVICE=bond0
ONBOOT=yes
BOOTPROTO=static
USERCTL=no
NETWORK=10.54.0.0
IPADDR=10.54.0.20
NETMASK=255.255.255.0
BROADCAST=10.54.0.255
GATEWAY=10.54.0.1
</code>
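Bring the new interfaces up and check that the bridge exists; a quick sanity check, assuming the stock ''network'' init script and ''bridge-utils'' are available:
<code bash>
# restart networking so bond0 and br0 come up
service network restart
# list bridges and their enslaved interfaces
brctl show
</code>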
  * Enable forwarding across bridges in iptables:
<code bash>
iptables -A FORWARD -m physdev --physdev-is-bridged -j ACCEPT
</code>
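If you keep iptables, persist the rule across reboots; a minimal sketch, assuming the stock CentOS 6 init script (which writes ''/etc/sysconfig/iptables''):
<code bash>
# save the current ruleset so it survives a reboot
service iptables save
</code>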
  Or disable iptables altogether (only if you have an external firewall):
<code bash>
service iptables stop
chkconfig iptables off
</code>
  * Enable IP forwarding in the kernel (set it in ''/etc/sysctl.conf'', then reload):
<code bash>
# in /etc/sysctl.conf:
net.ipv4.ip_forward = 1
# reload kernel parameters
sysctl -p /etc/sysctl.conf
</code>
  * Access the luci (Conga) web UI (the URL is displayed after starting luci, something like ''https://admin_node:8084/'') and create the cluster.
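Once the nodes have joined the new cluster you can sanity-check membership from any node; a quick check, assuming the cman tools were pulled in with the cluster packages:
<code bash>
# overall cluster state and quorum
cman_tool status
# list of member nodes
cman_tool nodes
</code>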
  * Define a fence method for each node.
  * This assumes you have exported a volume from your storage system to all the servers and set up multipathd (or equivalent), so the volume is visible under ''/dev/mapper''.
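For example, to confirm the shared LUN is visible on a node (assuming multipathd; the device alias is whatever your multipath configuration assigns):
<code bash>
# show the multipath topology
multipath -ll
# the multipath devices appear here
ls -l /dev/mapper/
</code>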
  * Create the partition on the disk:
<code bash>
parted /dev/mapper/SHAREDDISK
(parted) mklabel gpt
(parted) mkpart primary ext2 0 9999999G
(parted) set 1 lvm on
(parted) quit
</code>
  * Create the LVM physical volume and the clustered volume group on the new partition:
<code bash>pvcreate /dev/mapper/SHAREDDISKp1
vgcreate --clustered y SHAREDVG /dev/mapper/SHAREDDISKp1
</code>
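Note that a clustered VG (''--clustered y'') needs clvmd running on every node before it can be used; a minimal sketch, assuming the CentOS 6 ''lvm2-cluster'' package:
<code bash>
# install clustered LVM, switch LVM locking to cluster-wide, start the daemon
yum install lvm2-cluster
lvmconf --enable-cluster
service clvmd start && chkconfig clvmd on
</code>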
  * On the rest of the nodes (I didn't need it, but it's safer to run it):
<code bash>
partprobe; vgscan
</code>
  * Create the LVM logical volume:
<code bash>lvcreate -L 9999999G -n SHARED_LV SHAREDVG
</code>
  * Create the GFS2 filesystem. ''-t'' takes ''CLUSTERNAME:FSNAME'', where CLUSTERNAME must match the cluster name defined in luci, and ''-j 4'' creates 4 journals (you need at least one per node that will mount it):
<code bash>mkfs.gfs2 -p lock_dlm -t CLUSTERNAME:SHAREDVOLUME -j 4 /dev/mapper/SHAREDVG-SHARED_LV
</code>
  * Add the new filesystem to fstab (**MANDATORY**) on all the nodes:
<code bash>
# GFS
/dev/mapper/SHAREDVG-SHARED_LV            /mnt/shared_storage         gfs2    noatime         0 0
# GFS
</code>
  * Mount it on all the nodes:
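The mount point comes from the fstab entry above, so a plain mount is enough; enabling the ''gfs2'' init script (assuming the stock CentOS 6 one from gfs2-utils) remounts it at boot:
<code bash>
# create the mount point and mount the GFS2 filesystem (run on every node)
mkdir -p /mnt/shared_storage
mount /mnt/shared_storage
# mount gfs2 entries from fstab automatically at boot
chkconfig gfs2 on
</code>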
===== KVM =====
  * Install the dependencies:
<code bash>yum groupinstall "Virtualization Platform" "Virtualization Tools" "Virtualization" "Virtualization Client"
</code>
  * More deps for virt-manager:
<code bash>
yum install dejavu-lgc-sans-fonts
</code>
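Before defining any guests, make sure the libvirt daemon is running; a minimal sketch, assuming the stock CentOS 6 ''libvirtd'' service:
<code bash>
# start libvirtd now and at every boot
service libvirtd start
chkconfig libvirtd on
# quick smoke test: list defined guests (an empty list is fine)
virsh list --all
</code>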

====== Troubleshoot ======
If luci fails to start, the usual culprit on CentOS 6 is the bundled python-webob package; replace it with WebOb 1.0.8 installed through easy_install:
<code bash>
rpm -e --nodeps python-webob-0.9.6.1-3.el6.noarch
easy_install WebOb==1.0.8
</code>
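Then try starting luci again; if the WebOb swap was the problem it should come up now:
<code bash>service luci start</code>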
**This should be enough**, but if you're still having problems running luci, follow the rest of the instructions from the forum posts linked at the end of this page, which cover the exact file to edit.
  
That's it.
  
Thanks to [[http://scientificlinuxforum.org/index.php?s=e8f367a66e9529a1e2acd0b4b1a765f7&showtopic=939&view=findpost&p=7214|dmabry]] and [[https://groups.google.com/d/msg/turbogears/PtcScbOX-C0/G1WwKnIn04MJ|Michael Pedersen]]: combining their posts, I was able to get luci running.