====== Extending the cluster: Adding monitors ======
^ Documentation ^|
^Name:| Extending the cluster: Adding monitors |
^Description:| How to add more nodes to a running cluster |
^Modification date:|19/07/2019|
^Owner:|dodger|
^Notify changes to:|Owner |
^Tags:|ceph, object storage |
^Escalate to:|The_fucking_bofh|
====== Pre-Requisites ======
* [[linux:ceph:extending_cluster_bootstrap|Bootstrap]] new nodes
====== Variables used in this documentation ======
^ Name ^ Description ^ Sample ^
| ''${THESERVER}'' | Variable used as the salt target; it can be a mask of servers (see sample) | export THESERVER="avmlp-osm-00[56]*" |
| ''${NEWSERVERS}'' | Variable used as the ''clonewars'' target and for ''ceph-deploy'' | export NEWSERVERS="avmlp-osm-005 avmlp-osm-006" |
| ''${VMNAMESTART}'' | Variable used to build a regex in ''salt'' executions; it matches the environment prefix (''avmlp'', ''bvmlb'', ...) | export VMNAMESTART="avmlp" |
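Before running anything against ''${THESERVER}'', it can help to confirm which minions the mask actually matches; a minimal check, assuming the new nodes' salt keys are already accepted on the master:
<code bash>
export THESERVER="avmlp-osm-00[56]*"
# Only the minions matched by the mask should answer
salt "${THESERVER}" test.ping
</code>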
====== Instructions ======
===== Modify ceph.conf =====
You must add the new monitors to ''ceph.conf'':
<code ini>
[mon.avmlp-osm-005]
host = avmlp-osm-005.ciberterminal.net
addr = 10.20.54.55
public_addr = 10.20.54.55:6789

[mon.avmlp-osm-006]
host = avmlp-osm-006.ciberterminal.net
addr = 10.20.54.56
public_addr = 10.20.54.56:6789
</code>
And push it to the whole cluster:
<code bash>
ceph-deploy --overwrite-conf config push ${ALLSERVERS}
</code>
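''${ALLSERVERS}'' is not defined in the variables table above; it is assumed to hold the names of every node that should receive ''ceph.conf''. A hypothetical example:
<code bash>
# Hypothetical list: all existing nodes plus the two new monitors
export ALLSERVERS="avmlp-osm-001 avmlp-osm-002 avmlp-osm-003 avmlp-osm-004 avmlp-osm-005 avmlp-osm-006"
</code>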
Then restart the monitors (yes, it is a **salt** command):
<code bash>
salt "${THESERVER}" service.restart ceph-mon.target
</code>
===== Method 1: ceph-deploy =====
As seen [[http://docs.ceph.com/docs/nautilus/rados/deployment/ceph-deploy-mon/#add-a-monitor|here]]:
<code bash>
ceph-deploy mon create ${NEWSERVERS}
</code>
If you run into problems, double-check the ''fsid''!!!
[[https://www.spinics.net/lists/ceph-users/msg32177.html|Ref1]], [[https://www.spinics.net/lists/ceph-users/msg49046.html|Ref2]]
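A quick way to do that double check is to compare the ''fsid'' stored in the local ''ceph.conf'' with the one reported by the running cluster; a small sketch, assuming a working admin keyring on the node:
<code bash>
# fsid as written in the configuration file
grep fsid /etc/ceph/ceph.conf
# fsid as reported by the running cluster -- both values must match
sudo ceph fsid
</code>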
===== Method 2: Manual (as official instructions) =====
As seen [[http://docs.ceph.com/docs/nautilus/rados/operations/add-or-rm-mons/#adding-a-monitor-manual|here]]:
As the ''ceph'' user, on the ''admin'' node:
<code bash>
export TMPDIR=~/joining_ceph
mkdir ${TMPDIR}
cd ${TMPDIR}
sudo ceph auth get mon. -o keyfile.txt
sudo ceph mon getmap -o mapfile.bin
for i in ${NEWSERVERS} ; do scp -r "${TMPDIR}" ${i}:${TMPDIR} ; done
</code>
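Before copying the files around, the fetched monmap can be inspected to confirm it contains the expected ''fsid'' and the current monitors; a sketch using ''monmaptool'':
<code bash>
# Print the fsid and the monitor entries contained in the fetched map
monmaptool --print ${TMPDIR}/mapfile.bin
</code>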
Then, on **each new node**:
<code bash>
export TMPDIR=~/joining_ceph
sudo ceph-mon -i $(hostname) --mkfs --monmap ${TMPDIR}/mapfile.bin --keyring ${TMPDIR}/keyfile.txt
</code>
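The official procedure then starts the new daemon; a sketch, assuming the stock ''ceph-mon@<id>'' systemd unit shipped with the Ceph packages (the same step applies after Method 3 below):
<code bash>
# The monitor id must match the [mon.<id>] section added to ceph.conf
sudo systemctl enable --now ceph-mon@$(hostname)
</code>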
===== Method 3: Fully manual with fsid (the working one) =====
A small modification of the manual procedure; more info [[https://blog.ciberterminal.net/2019/07/23/ceph-troubleshooting-adding-new-monitors-to-cluster/|here]].
As the ''ceph'' user, on the ''admin'' node:
<code bash>
export TMPDIR=~/joining_ceph
mkdir ${TMPDIR}
cd ${TMPDIR}
sudo ceph auth get mon. -o keyfile.txt
sudo ceph mon getmap -o mapfile.bin
for i in ${NEWSERVERS} ; do scp -r "${TMPDIR}" ${i}:${TMPDIR} ; done
</code>
Then, on **each new node**:
<code bash>
export TMPDIR=~/joining_ceph
sudo ceph-mon -i $(hostname) --mkfs --fsid ${CLUSTER_FSID} --monmap ${TMPDIR}/mapfile.bin --keyring ${TMPDIR}/keyfile.txt
</code>
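''${CLUSTER_FSID}'' is not in the variables table; it has to hold the ''fsid'' of the existing cluster. It can be read on the admin node and exported on each new node before running the command above; once the ''--mkfs'' step succeeds, start the daemon as shown at the end of Method 2. A sketch:
<code bash>
# On the admin node: print the fsid of the running cluster
sudo ceph fsid
# On each new node: export it before running ceph-mon --mkfs (placeholder value)
export CLUSTER_FSID="<fsid printed above>"
</code>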