====== Extending the cluster: Adding monitors ======

^  Documentation  ^|
^Name:| Extending the cluster: Adding monitors |
^Description:| How to add more nodes to a running cluster |
^Modification date:|19/07/2019|
^Owner:|dodger@ciberterminal.net|
^Notify changes to:|Owner |
^Tags:|ceph, object storage |


====== Pre-Requisites ======

  * [[documentation:linux:ceph:extending_cluster_bootstrap|Bootstrap]] new nodes

====== Variables used in this documentation ======

^ Name ^ Description ^ Sample ^
| ''${THESERVER}'' | Variable used as the ''salt'' target; it can be a mask of servers (see sample) | <code bash>export THESERVER="pro-cephm-00[56]*"</code>
| ''${NEWSERVERS}'' | Variable used as the ''clonewars'' target and for ''ceph-deploy'' | <code bash>export NEWSERVERS="pro-cephm-005 pro-cephm-006"</code>
| ''${VMNAMESTART}'' | Variable used to build a regex for ''salt'' targeting; it matches the environment prefix (''pro'', ''demoenv'', ...) | <code bash>export VMNAMESTART="pro"</code>
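For convenience, the three variables can be exported in one block (the values are the samples from the table; adjust them to your environment):

<code bash>
# Sample values from the table above -- adjust to your environment.
export THESERVER="pro-cephm-00[56]*"
export NEWSERVERS="pro-cephm-005 pro-cephm-006"
export VMNAMESTART="pro"
</code>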


====== Instructions ======


===== Modify ceph.conf =====
You must add the new monitors to ''ceph.conf'':
<code bash>
[mon.pro-cephm-005]
host = pro-cephm-005.ciberterminal.net
addr = 10.20.54.55
public_addr = 10.20.54.55:6789

[mon.pro-cephm-006]
host = pro-cephm-006.ciberterminal.net
addr = 10.20.54.56
public_addr = 10.20.54.56:6789
</code>
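The ''[mon.*]'' sections follow a fixed pattern, so they can also be generated from a host/IP list instead of typed by hand. A minimal sketch (''gen_mon_sections'' is a hypothetical helper, not part of ceph; hosts, IPs and the ''ciberterminal.net'' domain are the sample values from this page):

<code bash>
# Hypothetical helper: emit the [mon.*] ceph.conf sections
# for "host ip" pairs read from stdin.
gen_mon_sections() {
  while read -r host ip; do
    printf '[mon.%s]\nhost = %s.ciberterminal.net\naddr = %s\npublic_addr = %s:6789\n\n' \
      "$host" "$host" "$ip" "$ip"
  done
}

# Sample values from this page:
gen_mon_sections <<'EOF'
pro-cephm-005 10.20.54.55
pro-cephm-006 10.20.54.56
EOF
</code>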
And push the updated configuration to the whole cluster:
<code bash>
ceph-deploy --overwrite-conf config push ${ALLSERVERS}
</code>

Then restart the monitors (yes, it is a **salt** command):
<code bash>
salt "${THESERVER}" service.restart ceph-mon.target
</code>


===== Method 1: ceph-deploy =====
As seen [[http://docs.ceph.com/docs/nautilus/rados/deployment/ceph-deploy-mon/#add-a-monitor|here]]:

<code bash>
ceph-deploy mon create ${NEWSERVERS}
</code>
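The ''${VMNAMESTART}'' variable from the table above is meant for building such salt target masks; a minimal sketch (the ''-cephm-'' hostname convention is an assumption taken from the sample values):

<code bash>
# Hypothetical target mask built from the environment prefix;
# the "-cephm-" naming convention comes from the sample hostnames.
export VMNAMESTART="pro"
export THESERVER="${VMNAMESTART}-cephm-*"
# salt "${THESERVER}" service.restart ceph-mon.target
</code>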
===== Method 2: Manual (following the official instructions) =====
As seen [[http://docs.ceph.com/docs/nautilus/rados/operations/add-or-rm-mons/#adding-a-monitor-manual|here]]:


As the ''ceph'' user, on the ''admin'' node:
<code bash>
export TMPDIR=~/joining_ceph
mkdir ${TMPDIR}
cd ${TMPDIR}
sudo ceph auth get mon. -o keyfile.txt
sudo ceph mon getmap -o mapfile.bin
for i in ${NEWSERVERS} ; do scp -r "${TMPDIR}" ${i}:${TMPDIR} ; done
</code>

Then on **each new node**:

<code bash>
export TMPDIR=~/joining_ceph
sudo ceph-mon -i $(hostname) --mkfs --monmap ${TMPDIR}/mapfile.bin --keyring ${TMPDIR}/keyfile.txt
</code>


===== Method 3: Fully manual with fsid (the working one) =====
A slight modification of the manual method; more info [[https://blog.ciberterminal.net/2019/07/23/ceph-troubleshooting-adding-new-monitors-to-cluster/|here]].

As the ''ceph'' user, on the ''admin'' node:
<code bash>
export TMPDIR=~/joining_ceph
mkdir ${TMPDIR}
cd ${TMPDIR}
sudo ceph auth get mon. -o keyfile.txt
sudo ceph mon getmap -o mapfile.bin
for i in ${NEWSERVERS} ; do scp -r "${TMPDIR}" ${i}:${TMPDIR} ; done
</code>

Then on **each new node**:

<code bash>
export TMPDIR=~/joining_ceph
sudo ceph-mon -i $(hostname) --mkfs --monmap ${TMPDIR}/mapfile.bin --keyring ${TMPDIR}/keyfile.txt
</code>
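The title of this method mentions the fsid; the linked blog post has the full details. As a hedged sketch only: ''ceph-mon --mkfs'' accepts a ''--fsid'' option to pin the cluster id explicitly, and the fsid could be read from ''ceph.conf'' with a small helper (''get_fsid'' below is hypothetical, not part of ceph):

<code bash>
# Hypothetical helper: read the cluster fsid from a ceph.conf file.
get_fsid() {
  awk -F= '$1 ~ /^[ \t]*fsid[ \t]*$/ {gsub(/[ \t]/, "", $2); print $2}' "$1"
}

# Sketch of passing the fsid explicitly at mkfs time (see the blog
# post linked above for the verified procedure):
# FSID=$(get_fsid /etc/ceph/ceph.conf)
# sudo ceph-mon -i $(hostname) --mkfs --fsid "${FSID}" \
#      --monmap ${TMPDIR}/mapfile.bin --keyring ${TMPDIR}/keyfile.txt
</code>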
ceph/extending_cluster_add_mons.1563875988.txt.gz · Last modified: 2019/07/23 09:59 by dodger