Extending the cluster: Adding monitors
Documentation | |
---|---|
Name: | Extending the cluster: Adding monitors |
Description: | How to add more monitor nodes to a running cluster |
Modification date: | 19/07/2019 |
Owner: | dodger |
Notify changes to: | Owner |
Tags: | ceph, object storage |
Escalate to: | Thefuckingbofh |
Pre-Requisites
- Bootstrap new nodes
Variables used in this documentation
Name | Description | Sample |
---|---|---|
${THESERVER} | Variable used as the salt target; it can be a mask of servers (see sample) | export THESERVER="avmlp-osm-00[56]*" |
${NEWSERVERS} | Variable used as the clonewars target and for ceph-deploy | export NEWSERVERS="avmlp-osm-005 avmlp-osm-006" |
${VMNAMESTART} | Variable used in a regex in salt executions; it matches the environment prefix (avmlp, bvmlb …) | export VMNAMESTART="avmlp" |
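Note that the config push further down also uses ${ALLSERVERS}, which is not listed in the table. A minimal setup sketch, assuming ${ALLSERVERS} simply lists every node that should receive the updated ceph.conf (the host list here is only an example):
# Targets used through this page (values are examples, adapt to your environment)
export THESERVER="avmlp-osm-00[56]*"
export NEWSERVERS="avmlp-osm-005 avmlp-osm-006"
export VMNAMESTART="avmlp"
# Assumption: ${ALLSERVERS} holds every cluster node that should get the new ceph.conf
export ALLSERVERS="avmlp-osm-001 avmlp-osm-002 avmlp-osm-003 avmlp-osm-004 ${NEWSERVERS}"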
Instructions
Modify ceph.conf
You must add the new monitors to ceph.conf:
[mon.avmlp-osm-005]
host = avmlp-osm-005.ciberterminal.net
addr = 10.20.54.55
public_addr = 10.20.54.55:6789

[mon.avmlp-osm-006]
host = avmlp-osm-006.ciberterminal.net
addr = 10.20.54.56
public_addr = 10.20.54.56:6789
And push it to all the cluster nodes:
ceph-deploy --overwrite-conf config push ${ALLSERVERS}
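If you want to confirm the new sections actually landed on the targeted nodes before restarting anything, a quick sketch using salt's cmd.run (adjust the target mask if you want to check every node, not just the ones ${THESERVER} matches):
# List the [mon.*] sections present in each node's ceph.conf
salt "${THESERVER}" cmd.run "grep -E '^\[mon\.' /etc/ceph/ceph.conf"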
Then restart the monitors (yes, it is a salt command):
salt "${THESERVER}" service.restart ceph-mon.target
Method 1: ceph-deploy
As seen here:
ceph-deploy mon create ${NEWSERVERS}
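Once ceph-deploy finishes, the new monitors should appear in the monmap and join the quorum. A quick check from the admin node (standard ceph CLI, nothing specific to this setup):
# The new mons should be listed and counted in the quorum
sudo ceph mon stat
sudo ceph quorum_status --format json-pretty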
Method 2: Manual (following the official instructions)
As seen here:
As the ceph user, on the admin node:
export TMPDIR=~/joining_ceph
mkdir ${TMPDIR}
cd ${TMPDIR}
sudo ceph auth get mon. -o keyfile.txt
sudo ceph mon getmap -o mapfile.bin
for i in ${NEWSERVERS} ; do scp -r "${TMPDIR}" ${i}:${TMPDIR} ; done
Then on each new node:
export TMPDIR=~/joining_ceph
sudo ceph-mon -i $(hostname) --mkfs --monmap ${TMPDIR}/mapfile.bin --keyring ${TMPDIR}/keyfile.txt
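The mkfs step only prepares the monitor's data directory; the daemon still has to be enabled and started. A sketch assuming systemd-managed Ceph with the short hostname as the mon id (the same id passed to ceph-mon -i above):
# Assumption: systemd template units named ceph-mon@<mon-id>
sudo systemctl enable ceph-mon@$(hostname)
sudo systemctl start ceph-mon@$(hostname)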
Method 3: Fully manual with fsid (the working one)
A small modification of the manual procedure; more info here.
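The mkfs command below references ${CLUSTER_FSID}, which is not defined anywhere on this page. A minimal sketch for setting it, assuming the fsid line is present in ceph.conf on each new node (it should be, after the config push above); make sure the variable is exported in the shell where you run the mkfs step:
# Read the cluster fsid from the local ceph.conf (alternatively: sudo ceph fsid on any admin node)
export CLUSTER_FSID=$(awk '/^fsid/ {print $3}' /etc/ceph/ceph.conf)
echo ${CLUSTER_FSID}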
As the ceph user, on the admin node:
export TMPDIR=~/joining_ceph
mkdir ${TMPDIR}
cd ${TMPDIR}
sudo ceph auth get mon. -o keyfile.txt
sudo ceph mon getmap -o mapfile.bin
for i in ${NEWSERVERS} ; do scp -r "${TMPDIR}" ${i}:${TMPDIR} ; done
Then on each new node:
export TMPDIR=~/joining_ceph
sudo ceph-mon -i $(hostname) --mkfs --fsid ${CLUSTER_FSID} --monmap ${TMPDIR}/mapfile.bin --keyring ${TMPDIR}/keyfile.txt
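As with Method 2, the new monitor still has to be enabled and started once its data directory exists; afterwards it should show up in the quorum. A sketch assuming systemd-managed Ceph:
# On each new node (assumption: ceph-mon@<mon-id> systemd units)
sudo systemctl enable ceph-mon@$(hostname)
sudo systemctl start ceph-mon@$(hostname)
# Back on the admin node: the new mons should now appear in the quorum
sudo ceph mon stat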