====== [HOWTO] Linux KVM cluster ======

====== Description ======
Instructions on how to deploy a high availability KVM cluster based on CentOS 6.x

====== Instructions ======
===== RHEL cluster =====
  * Deploy CentOS (minimal)
  * Set up basic networking.
  * Start luci:
<code bash>service luci start</code>
If luci refuses to start, read the [[high_availability_virtualization_cluster#Troubleshoot|troubleshoot]] section.
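You'll probably also want luci to survive a reboot; a minimal sketch using the stock CentOS 6 service tools:
<code bash>
# make luci start automatically at boot
chkconfig luci on
# confirm it is currently running
service luci status
</code>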
  * Disable NetworkManager (the cluster stack manages the network itself) and reload the kernel parameters from ''/etc/sysctl.conf'':
<code bash>
service NetworkManager stop
chkconfig NetworkManager off
sysctl -p /etc/sysctl.conf
</code>
  * Access the luci web UI (the Conga frontend; the URL is displayed after starting luci, something like https://admin_node:8084/) and create the cluster.
  * Define a fence method for each node (a command-line sketch with ''ccs'' follows below).
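If you'd rather not click through luci, here is a rough sketch of the same thing with ''ccs''; the device name ''myfence'', the node name ''node01'' and the IPMI address/credentials are placeholders, and ''fence_ipmilan'' is just one possible agent:
<code bash>
# declare a fence device in the cluster config (run against one node)
ccs -h node01 --addfencedev myfence agent=fence_ipmilan ipaddr=10.0.0.101 login=admin passwd=secret
# give the node a fence method and attach an instance of the device to it
ccs -h node01 --addmethod primary node01
ccs -h node01 --addfenceinst myfence node01 primary
# push the resulting cluster.conf to all nodes
ccs -h node01 --sync --activate
</code>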
  * I suppose you've exported some volume from your storage system to all the servers and set up multipathd (or whatever you use) so you can see the volume under ''/dev/mapper''.
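To double-check that every node really sees the same shared disk (the ''SHAREDDISK'' alias used below is an example name):
<code bash>
# list the multipath devices and their paths on each node
multipath -ll
ls -l /dev/mapper/
</code>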
  * Create the partition on the disk:
<code bash>
parted /dev/mapper/SHAREDDISK
(parted) mklabel gpt
(parted) mkpart primary ext2 0 9999999G
(parted) set 1 lvm on
(parted) quit
</code>
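Before going on it's worth confirming the new partition shows up as expected (same example device name as above):
<code bash>
# print the partition table of the shared disk
parted /dev/mapper/SHAREDDISK print
</code>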
  * Create the LVM physical volume and the clustered volume group on the new partition:
<code bash>pvcreate /dev/mapper/SHAREDDISKp1
vgcreate --clustered y SHAREDVG /dev/mapper/SHAREDDISKp1
</code>
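Heads-up: a clustered VG (''--clustered y'') only works when the clustered LVM daemon runs on every node; a sketch assuming the stock ''lvm2-cluster'' package:
<code bash>
# switch LVM to cluster-wide locking (sets locking_type = 3 in lvm.conf)
lvmconf --enable-cluster
# clvmd distributes LVM metadata locks across the nodes
service clvmd start
chkconfig clvmd on
</code>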
  * On the rest of the nodes (I didn't need it, but it's safer to run it):
<code bash>
partprobe; vgscan
</code>
  * Create the LVM volume:
<code bash>lvcreate -L 9999999G -n SHARED_LV SHAREDVG
</code>
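You can check the result from any node (names are the example ones used above):
<code bash>
# the logical volume should be visible cluster-wide
lvs SHAREDVG
</code>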
  * And create the GFS2 filesystem (''-t'' takes the cluster name from cluster.conf plus a filesystem name; ''-j'' is the number of journals, one per node that will mount it):
<code bash>mkfs.gfs2 -p lock_dlm -t CLUSTERNAME:SHAREDVOLUME -j 4 /dev/mapper/SHAREDVG-SHARED_LV
</code>
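''-j 4'' gives you four journals; if you ever grow past that, journals can be added later to the **mounted** filesystem. A sketch, assuming the mount point from the fstab entry below:
<code bash>
# add one more journal to the mounted GFS2 filesystem
gfs2_jadd -j 1 /mnt/shared_storage
</code>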
  * Add the new filesystem to fstab (**MANDATORY**) on all the nodes:
<code bash>
# GFS
/dev/mapper/SHAREDVG-SHARED_LV            /mnt/shared_storage         gfs2    noatime         0 0
# GFS
</code>
  * Mount it (see the sketch below).
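A minimal way to do that on each node, assuming the mount point from the fstab entry above; on CentOS 6 the ''gfs2'' init script takes care of GFS2 fstab entries at boot:
<code bash>
# create the mount point and mount the shared filesystem
mkdir -p /mnt/shared_storage
mount /mnt/shared_storage
# have the gfs2 init script mount it on boot
chkconfig gfs2 on
</code>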
  
  
  
===== KVM =====
  * Install the dependencies:
<code bash>yum groupinstall "Virtualization Platform" "Virtualization Tools" Virtualization "Virtualization Client"
</code>
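Once the packages are in, the libvirt daemon has to be running before you can define any guest; stock CentOS 6 service handling:
<code bash>
# start libvirtd now and on every boot
service libvirtd start
chkconfig libvirtd on
</code>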
  * More deps for virt-manager:
<code bash>
yum install dejavu-lgc-sans-fonts
</code>
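A quick sanity check that libvirt and KVM are talking to each other (nothing here is cluster-specific):
<code bash>
# should report the hypervisor's CPU/memory topology
virsh -c qemu:///system nodeinfo
# list running guests; --all also shows defined but stopped ones
virsh -c qemu:///system list --all
</code>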
  
  