High availability virtualization cluster


Description

Instructions on how to deploy a high availability KVM cluster based on CentOS 6.x

Instructions

RHEL cluster

  • Deploy CentOS (minimal install).
  • Set up basic networking.
  • Disable SELinux; one way to do it is sketched below.
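A minimal sketch, assuming the stock CentOS 6 paths: mark SELinux disabled for the next boot and switch to permissive mode right away.
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
setenforce 0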
  • Install EPEL meta-pkg:
rpm -Uvh http://download.fedoraproject.org/pub/epel/6/i386/epel-release-6-8.noarch.rpm
  • Install some basic packages (personal preference):
yum -y install vim tmux lsof strace
  • Update:
yum update
  • Set up multipathd and your shared storage 8-)
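If you still need multipathing, a minimal sketch follows; the defaults here are generic, so follow your storage vendor's recommendations for the real multipath.conf.
yum -y install device-mapper-multipath
mpathconf --enable --with_multipathd y
multipath -ll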
  • Install the clusterware suite:
yum groupinstall "High Availability" "Resilient Storage"
  • In the “master” node, install the admin suite:
yum groupinstall "High Availability Management"
  • Change the password for the “ricci” user (on all nodes):
passwd ricci
  • Enable the services at boot (luci only on the admin node):
chkconfig ricci on
chkconfig luci on
chkconfig cman on
chkconfig modclusterd on
  • Start services:
service ricci start
service cman start
  • Start luci on the admin node:
service luci start

If luci refuses to start, see the Troubleshooting section below.

  • Disable NetworkManager:
chkconfig NetworkManager off && service NetworkManager stop
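With NetworkManager out of the way, make sure the classic network init script manages the interfaces instead:
chkconfig network on
service network restart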
  • Set up at least one bridge. For example, with 4 interfaces: eth0+eth1 form a bond (bond0), eth2+eth3 feed a bridge (br0):
    • ifcfg-eth[01]
DEVICE=eth[01]
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=none
MASTER=bond0
SLAVE=yes
    • ifcfg-eth[23]
DEVICE=eth[23]
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=none
BRIDGE=br0
    • ifcfg-br0
DEVICE=br0
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=none
USERCTL=no
    • ifcfg-bond0
DEVICE=bond0
ONBOOT=yes
BOOTPROTO=static
USERCTL=no
NETWORK=10.54.0.0
IPADDR=10.54.0.20
NETMASK=255.255.255.0
BROADCAST=10.54.0.255
GATEWAY=10.54.0.1
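The bond above runs with the kernel's default bonding mode; if you want to pin the mode and link monitoring interval, one option is a BONDING_OPTS line in ifcfg-bond0 (the values here are assumptions, adjust them to your switches):
BONDING_OPTS="mode=active-backup miimon=100"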
  • Allow bridged traffic through the FORWARD chain in iptables:
iptables -A FORWARD -m physdev --physdev-is-bridged -j ACCEPT
Or disable iptables (if you have a good firewall):
service iptables stop
chkconfig iptables off
  • Enable IP forwarding in the kernel: set this in /etc/sysctl.conf and reload it:
net.ipv4.ip_forward = 1
sysctl -p /etc/sysctl.conf
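You can check the running value afterwards:
sysctl net.ipv4.ip_forward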
  • Access the luci web UI (Conga); the URL is displayed after starting luci, something like https://admin_node:8084/. Create the cluster there.
  • Define a fencing method for each node.
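Fencing depends entirely on your hardware. Purely as an illustration, an IPMI-based fence device can be tested by hand with fence_ipmilan (the hostname and credentials here are placeholders):
fence_ipmilan -a node01-ipmi.example.com -l fenceuser -p fencepass -o status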
  • This assumes you have exported a volume from your storage system to all the servers and set up multipathd (or similar), so the volume is visible under /dev/mapper.
  • Create the partition on the disk:
parted /dev/mapper/SHAREDDISK
(parted) mklabel gpt
(parted) mkpart primary ext2 0 9999999G
(parted) set 1 lvm on
(parted) quit
  • Create the lvm on the new partition:
pvcreate /dev/mapper/SHAREDDISKp1
vgcreate --clustered y SHAREDVG /dev/mapper/SHAREDDISKp1
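A clustered VG needs cluster-wide LVM locking and clvmd running on every node; assuming lvm2-cluster was pulled in by the "Resilient Storage" group, something like:
lvmconf --enable-cluster
chkconfig clvmd on
service clvmd start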
  • On the rest of the nodes (I didn't need it, but it's safer to run it):
partprobe; vgscan
  • Create the logical volume:
lvcreate -L 9999999G -n SHARED_LV SHAREDVG
  • And create the GFS2:
 mkfs.gfs2 -p lock_dlm -t CLUSTERNAME:SHAREDVOLUME -j 4 /dev/mapper/SHAREDVG-SHARED_LV
  • Add the new filesystem to fstab (MANDATORY) on all the nodes:
# GFS
/dev/mapper/SHAREDVG-SHARED_LV            /mnt/shared_storage         gfs2    noatime         0 0
# GFS
  • Mount it on all the nodes.
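For example, using the mount point from the fstab entry above, and enabling the gfs2 init script so the filesystem is mounted at boot and unmounted cleanly at shutdown:
mkdir -p /mnt/shared_storage
mount /mnt/shared_storage
chkconfig gfs2 on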

KVM

  • Install the dependencies:
yum groupinstall "Virtualization Platform" "Virtualization Tools" "Virtualization" "Virtualization Client"
  • An additional dependency for virt-manager (fonts):
yum install dejavu-lgc-sans-fonts
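After installing the groups, it is worth checking that libvirtd is enabled and answering before defining any guests (qemu:///system is the local default URI):
chkconfig libvirtd on
service libvirtd start
virsh -c qemu:///system list --all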
 

Troubleshooting

Luci

If you get an error like this:

Unable to create the luci base configuration file (`/var/lib/luci/etc/luci.ini').

You can try to:

rpm -e --nodeps python-webob-0.9.6.1-3.el6.noarch
easy_install WebOb==1.0.8

This should be enough, but if you're still having problems running luci, continue with these instructions. Edit this file:

vi /usr/lib/python2.6/site-packages/pylons/decorators/__init__.py

And comment out or remove this line (around line 20):

from webob import UnicodeMultiDict

That's it.

Thanks to dmabry and Michael Pedersen; combining their posts I was able to get luci running.
