====== Instructions ======

===== RHEL cluster =====
  * Deploy CentOS (minimal).
  * Set up basic networking.
  * Install the cluster packages:
<code bash>yum groupinstall "High Availability Management"
</code>
  * Change the password for the "ricci" user:
<code bash>passwd ricci</code>
  * Configure services:
<code bash>
chkconfig ricci on
chkconfig cman on
chkconfig modclusterd on
</code>
  * Start services:
<code bash>
service ricci start
service cman start
</code>
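Once ''cman'' is up you can sanity-check membership (''clustat'' needs rgmanager installed):
<code bash>
cman_tool status   # cluster name, quorum and node count
clustat            # per-node status
</code>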
  * Start luci on the admin node:
<code bash>service luci start</code>
If luci refuses to start, read the [[high_availability_virtualization_cluster#troubleshooting|troubleshooting]] section.
  * Disable NetworkManager:
<code bash>
chkconfig NetworkManager off
service NetworkManager stop
</code>
  * Configure bridged networking for the VMs; the bridge's ''ifcfg'' file ends with its gateway:
<code bash>
GATEWAY=10.54.0.1
</code>
  * Enable forwarding on bridges in iptables:
<code bash>
iptables -A FORWARD -m physdev --physdev-is-bridged -j ACCEPT
</code>
Or disable iptables (if you have a good firewall):
<code bash>
service iptables stop
chkconfig iptables off
</code>
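If you keep iptables, persist the bridge-forwarding rule across reboots:
<code bash>
service iptables save   # writes the running rules to /etc/sysconfig/iptables
</code>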
  * Enable forwarding in the kernel:
<code bash>
echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
sysctl -p /etc/sysctl.conf
</code>
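Check that forwarding is actually on:
<code bash>
cat /proc/sys/net/ipv4/ip_forward   # should print 1
</code>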
  * Access the luci web UI (Conga); the URL is displayed after starting luci, something like ''https://<admin_node>:8084''.
  * Define a fence method for each node (a ''ccs'' sketch follows).
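Fencing can also be defined from the shell with ''ccs'' instead of the luci UI; the node name, device name and IPMI credentials below are made-up examples, adjust them to your hardware:
<code bash>
# hypothetical IPMI fence device for node01
ccs -h node01 --addfencedev ipmi_node01 agent=fence_ipmilan ipaddr=10.54.0.201 login=admin passwd=secret
ccs -h node01 --addmethod IPMI node01
ccs -h node01 --addfenceinst ipmi_node01 node01 IPMI
</code>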
  * I assume you've exported a volume from your storage system to all the servers and set up multipathd (or whatever), so you can see the volume under ''/dev/mapper/''.
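Assuming multipathd, a quick check that the shared volume is visible:
<code bash>
multipath -ll   # should list the exported volume and its paths
</code>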
  * Create the partition on the disk:
<code bash>
parted /dev/mapper/<device>
(parted) mklabel gpt
(parted) mkpart primary ext2 0 9999999G
(parted) set 1 lvm on
(parted) quit
</code>
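You can double-check the result:
<code bash>
parted /dev/mapper/<device> print
</code>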
  * Create the clustered volume group on the new partition:
<code bash>
vgcreate --clustered y SHAREDVG /dev/mapper/<device>p1
</code>
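A ''--clustered y'' volume group needs ''clvmd'' (from lvm2-cluster) running on every node:
<code bash>
chkconfig clvmd on
service clvmd start
</code>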
  * On the rest of the nodes (I didn't need it, but it's safer to run it):
<code bash>
partprobe; vgscan
</code>
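The shared volume group should now show up on every node:
<code bash>
vgs SHAREDVG
</code>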
  * Create the LVM logical volume:
<code bash>lvcreate -L 9999999G -n SHARED_LV SHAREDVG
</code>
  * And create the GFS2 filesystem on it:
<code bash>
# -t is CLUSTERNAME:FSNAME; -j is the number of journals, one per node
mkfs.gfs2 -p lock_dlm -t CLUSTERNAME:FSNAME -j <number_of_nodes> /dev/SHAREDVG/SHARED_LV
</code>
  * Add the new filesystem to fstab (**MANDATORY**) on all the nodes:
<code bash>
# GFS
/dev/SHAREDVG/SHARED_LV   <mountpoint>   gfs2   defaults   0 0
# GFS
</code>
  * Mount it on all the nodes.
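A minimal way to pick up the new fstab entry on each node:
<code bash>
mount -a            # mounts everything in fstab, including the new GFS2
mount | grep gfs2   # verify it is mounted
</code>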
===== KVM =====
  * Install KVM:
<code bash>yum groupinstall "Virtualization" "Virtualization Client" "Virtualization Platform" "Virtualization Tools"
</code>
  * More deps for virt-manager:
<code bash>
yum install dejavu-lgc-sans-fonts
</code>
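Assuming the Virtualization groups pulled in libvirt, enable and start the daemon:
<code bash>
chkconfig libvirtd on
service libvirtd start
</code>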