Deploying a ceph cluster
| Documentation | |
|---|---|
| Name: | Deploying a ceph cluster |
| Description: | Steps to deploy a ceph cluster |
| Modification date: | 21/11/2018 |
| Owner: | dodger |
| Notify changes to: | Owner |
| Tags: | ceph, object storage |
| Escalate to: | The_fucking_bofh |
THIS DOCUMENT IS MERELY INFORMATIVE, YOU MUST FOLLOW OFFICIAL DOCUMENTATION!
THIS DOCUMENT HAS OUT-OF-DATE PARTS
JUST DON'T COPY PASTE FROM THIS DOCUMENT UNLESS YOU KNOW WHAT YOU'RE DOING!!!
External documentation
Prerequisites
Basic knowledge of:
Variables used in this documentation
| Name | Description | Sample |
|---|---|---|
| ${THESERVER} | Variable used as the salt target; it can be a mask of servers (see sample) | export THESERVER="bvmlb-os*-00*" |
| ${LISTOFSERVERS} | Variable used as the ceph-deploy target | export LISTOFSERVERS="bvmlb-osm-001 bvmlb-osd-001" |
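Before running anything against a mask, it is worth checking which minions it actually matches. A minimal sketch using salt's test.ping, with the sample mask from the table above:

```bash
# list the minions the mask targets before running anything destructive
export THESERVER="bvmlb-os*-00*"
salt "${THESERVER}" test.ping
```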
Deploy VMs
We use:
- 3 OSDs: disk servers
- 1 VM as admin
- 1 VM as monitor
- 1 VM as gateway
For disk servers:
```bash
bash CloneWars.sh -c nuclu -h ${THESERVER} -i ${THESERVERIP} -d 40GB -m 20 -O -r 4096 -v 2 -o 2
```
For adm & monitoring servers:
```bash
bash CloneWars.sh -c nuclu -h ${THESERVER} -i ${THESERVERIP} -m 20 -O -r 2048 -v 1 -o 2
```
For gateway:
```bash
bash CloneWars.sh -c nuclu -h ${THESERVER} -i ${THESERVERIP} -m 20 -O -r 4096 -v 2 -o 4
```
Run salt basic states
- Connect to salt-master
- Run the postgresql sls:

```bash
salt "${THESERVER}" state.apply
salt "${THESERVER}" state.apply nsupdate
```
Additional steps with salt
Install yum-plugin-priorities
On all the servers:

```bash
salt "${THESERVER}" pkg.install yum-plugin-priorities
```
Install ceph-deploy
On the admin node:

```bash
salt "${THESERVER}" pkg.install ceph-deploy
```
Add ceph user
On all the servers:

```bash
salt "${THESERVER}" user.add ceph 1002
```
Check:

```bash
salt "${THESERVER}" user.info ceph
```
Add ceph user to sudoers
On all the servers:

```bash
salt "${THESERVER}" file.write /etc/sudoers.d/ceph \
    "ceph ALL = (root) NOPASSWD:ALL"
```
Check:

```bash
salt "${THESERVER}" cmd.run 'cat /etc/sudoers.d/ceph'
salt "${THESERVER}" cmd.run "sudo whoami" runas=ceph
```
Generate ssh keys
On all the servers:

```bash
salt "${THESERVER}" cmd.run \
    "ssh-keygen -q -N '' -f /home/ceph/.ssh/id_rsa" \
    runas=ceph
```
Populate ssh keys
Get the public keys from all the servers:

```bash
# egrep drops the minion-name lines (hostnames start with "b");
# sed strips the leading indentation salt adds to the output
salt "${THESERVER}" cmd.run "cat /home/ceph/.ssh/id_rsa.pub" | egrep -v "^b" | sed 's/^[[:space:]]\{1,5\}//g' > auth_keys_oss.txt
```
Populate authorized_keys
On all the servers:

```bash
salt "${THESERVER}" file.copy /home/ceph/.ssh/id_rsa.pub /home/ceph/.ssh/authorized_keys
# append every collected public key to every node's authorized_keys
while read LINE ; do salt "${THESERVER}" file.append /home/ceph/.ssh/authorized_keys "${LINE}" ; done < auth_keys_oss.txt
```
Deploy CEPH
Ceph actions are run on the admin node, not on the salt-master.
Software install
On the admin node:

```bash
su - ceph
mkdir ~/ceph-deploy
cd ~/ceph-deploy
```
Export the LISTOFSERVERS variable, for example:

```bash
export LISTOFSERVERS="bvmld-osadm-101 bvmld-osm-101 bvmld-osd-101 bvmld-osd-102 bvmld-osd-103"
```
The gateway node is not included in this list!
hammer is the latest version available for RHEL6!!
And run the install:

```bash
ceph-deploy install ${LISTOFSERVERS} --repo-url https://download.ceph.com/rpm-hammer/el7/
```
Nautilus is the latest for RHEL7:
```bash
ceph-deploy install ${LISTOFSERVERS} --repo-url https://download.ceph.com/rpm-nautilus/el7/
```
Wait for ceph-deploy to finish its jobs (it will take some time).
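Once it finishes, a quick sanity check that the packages landed on every node (run as ceph on the admin node):

```bash
# every node should report the same ceph version
for THESERVER in ${LISTOFSERVERS} ; do
    ssh ${THESERVER} "ceph --version"
done
```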
Deploy monitoring node
Export the THESERVER var:

```bash
##########################
# monitoring server
##########################
THESERVER="bvmlb-osm-001"
```
Deploy it:
```bash
ceph-deploy new ${THESERVER}
ceph-deploy mon create-initial
ceph-deploy gatherkeys ${THESERVER}
```
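Optionally verify that the monitor has formed quorum before going on; a sketch run over ssh, since the admin keyring may not be deployed everywhere yet:

```bash
# the mon should list itself inside "quorum_names"
ssh ${THESERVER} "sudo ceph quorum_status --format json-pretty"
```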
Enable messenger v2 protocol:
```bash
sudo ceph mon enable-msgr2
```
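You can verify v2 is active by looking at the monitor addresses (msgr2 listens on port 3300 by default):

```bash
# each mon should now advertise a [v2:...:3300,v1:...:6789] address pair
sudo ceph mon dump
```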
Deploy manager node
Only for Luminous and later (version > 12.x).
Export the LISTOFSERVERS var:

```bash
##########################
# manager servers
##########################
LISTOFSERVERS="bvmlb-osm-001 bvmlb-osm-002"
```
Deploy it:
```bash
for THESERVER in ${LISTOFSERVERS} ; do
    ceph-deploy mgr create ${THESERVER}
done
```
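Check that the managers registered (one active, the rest standby), from a node holding the admin keyring:

```bash
# the "mgr:" line of the status shows the active and standby daemons
sudo ceph -s
```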
Dashboard plugin (for manager)
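The original instructions for this plugin were lost; what follows is a minimal sketch for Nautilus, assuming the ceph-mgr-dashboard package is available in your repo and an admin user/password of your choosing:

```bash
# install the module on the manager node (assumption: Nautilus on el7)
sudo yum -y install ceph-mgr-dashboard
sudo ceph mgr module enable dashboard
# self-signed TLS certificate for the web UI
sudo ceph dashboard create-self-signed-cert
# create a dashboard admin user (replace the password)
sudo ceph dashboard ac-user-create admin 'CHANGEME' administrator
```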
Enable PG autoscale (plugin for manager)
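Again the original instructions were lost; a minimal sketch for Nautilus, where the autoscaler is a manager module enabled per pool:

```bash
# enable the autoscaler module once per cluster
sudo ceph mgr module enable pg_autoscaler
# then switch it on per pool (POOL_NAME is whichever pool you are tuning)
sudo ceph osd pool set ${POOL_NAME} pg_autoscale_mode on
```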
Deploy disk nodes
Export the LISTOFSERVERS variable, for example:

```bash
export LISTOFSERVERS="bvmlb-osd-001 bvmlb-osd-002 bvmlb-osd-003"
```
Check the disks:

```bash
##########################
# STORAGE SERVERS
##########################
# check
for THESERVER in ${LISTOFSERVERS} ; do
    echo "${THESERVER}"
    ceph-deploy disk list "${THESERVER}"
done
```
New versions
For new versions (version > 12.x), create the OSDs:

```bash
# deploy storage nodes
for THESERVER in ${LISTOFSERVERS} ; do
    echo "${THESERVER}"
    echo "################### ${THESERVER}: creating disk"
    ceph-deploy osd create ${THESERVER} --data /dev/sdb
    echo "################### ${THESERVER}: listing (check) disk"
    ceph-deploy osd list ${THESERVER}
done
```
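Once the loop finishes, the OSDs should show up and be up/in; check from a node holding the admin keyring:

```bash
# expect one OSD per disk server, all marked "up"
sudo ceph osd tree
sudo ceph -s
```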
Old versions
For old versions (version < 12.x), create the OSDs:

```bash
# deploy storage nodes
for THESERVER in ${LISTOFSERVERS} ; do
    echo "${THESERVER}"
    ceph-deploy disk zap ${THESERVER} /dev/sdb
    ceph-deploy osd prepare ${THESERVER} /dev/sdb
    ceph-deploy osd activate ${THESERVER} /dev/sdb1
    ceph-deploy disk list ${THESERVER}
    echo "press enter to continue"
    read
done
```
[OPTIONAL] Deploy admin on all nodes (except gateway)
Export the LISTOFSERVERS variable, for example:

```bash
export LISTOFSERVERS="bvmld-osadm-101 bvmld-osm-101 bvmld-osd-101 bvmld-osd-102 bvmld-osd-103"
```
And deploy it:
```bash
##########################
# Deploy admin on all the nodes
##########################
ceph-deploy admin ${LISTOFSERVERS}
```
Check the keyring (note: this is a salt command, in case you didn't notice):

```bash
salt "${THESERVER}" file.check_perms /etc/ceph/ceph.client.admin.keyring '{}' root root 644
```
Deploy gateway
Doc:
Deploy the RADOS gateway from the ceph admin node:

```bash
THESERVER="bvmlb-osgw-001"
ceph-deploy install --rgw ${THESERVER} --repo-url https://download.ceph.com/rpm-hammer/el7/
ceph-deploy admin ${THESERVER}
ssh ${THESERVER} "sudo yum -y install ceph-radosgw"
ceph-deploy rgw create ${THESERVER}
```
Make 80 the default port:

```bash
cat >>ceph.conf<<EOF
[client.rgw.${THESERVER}]
rgw_frontends = "civetweb port=80"
rgw_thread_pool_size = 100
EOF
ceph-deploy --overwrite-conf config push ${THESERVER}
```
Restart radosgw:
```bash
ssh ${THESERVER} "sudo systemctl restart ceph-radosgw"
ssh ${THESERVER} "sudo systemctl status ceph-radosgw"
```
Check (on the gw node):

```bash
ssh ${THESERVER} "sudo radosgw-admin zone get"
```
Sample:
```
ceph@bvmlb-osadm-001 ~/ceph-deploy $ ssh ${THESERVER} "sudo radosgw-admin zone get"
{
    "domain_root": ".rgw",
    "control_pool": ".rgw.control",
    "gc_pool": ".rgw.gc",
    "log_pool": ".log",
    "intent_log_pool": ".intent-log",
    "usage_log_pool": ".usage",
    "user_keys_pool": ".users",
    "user_email_pool": ".users.email",
    "user_swift_pool": ".users.swift",
    "user_uid_pool": ".users.uid",
    "system_key": {
        "access_key": "",
        "secret_key": ""
    },
    "placement_pools": [
        {
            "key": "default-placement",
            "val": {
                "index_pool": ".rgw.buckets.index",
                "data_pool": ".rgw.buckets",
                "data_extra_pool": ".rgw.buckets.extra"
            }
        }
    ]
}
```
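As an extra check, the gateway should answer anonymous S3 requests on port 80; an anonymous caller gets an empty bucket listing:

```bash
# expect an empty ListAllMyBucketsResult XML document back
curl http://${THESERVER}/
```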
Deploying MDS
Create the VMs:

```bash
for i in 1 3 ; do
    bash CloneWars.sh -F -c nuciberterminal -i 10.20.55.1${i} -v 4 -o 2 -r 4096 -O -m 20 -h AVMLP-OSFS-00${i}
done
for i in 2 4 ; do
    bash CloneWars.sh -F -c nuciberterminal2 -i 10.20.55.1${i} -v 4 -o 2 -r 4096 -O -m 20 -h AVMLP-OSFS-00${i}
done
```
Run the basic setup against the new nodes (same steps as above):

```bash
export THESERVER="avmlp-osfs*.ciberterminal.net"
salt "${THESERVER}" state.apply
salt "${THESERVER}" state.apply nsupdate
salt "${THESERVER}" pkg.install yum-plugin-priorities
salt "${THESERVER}" user.add ceph 1002
salt "${THESERVER}" file.write /etc/sudoers.d/ceph "ceph ALL = (root) NOPASSWD:ALL"
salt "${THESERVER}" cmd.run "cat /etc/sudoers.d/ceph"
salt "${THESERVER}" cmd.run "sudo whoami" runas=ceph
salt "${THESERVER}" cmd.run "ssh-keygen -q -N '' -f /home/ceph/.ssh/id_rsa" runas=ceph
# egrep drops the minion-name lines (hostnames start with "a")
salt "${THESERVER}" cmd.run "cat /home/ceph/.ssh/id_rsa.pub" | egrep -v "^a" | sed 's/^[[:space:]]\{1,5\}//g' > auth_keys_avmlp-os.txt
export THESERVER="avmlp-os*.ciberterminal.net"
salt "${THESERVER}" file.copy /home/ceph/.ssh/id_rsa.pub /home/ceph/.ssh/authorized_keys
while read LINE ; do salt "${THESERVER}" file.append /home/ceph/.ssh/authorized_keys "${LINE}" ; done < auth_keys_avmlp-os.txt
```
From osm-001:
```bash
export MDSSERVERS="avmlp-osfs-002.ciberterminal.net avmlp-osfs-001.ciberterminal.net avmlp-osfs-004.ciberterminal.net avmlp-osfs-003.ciberterminal.net"
# push the ceph repo file to the MDS nodes
for i in ${MDSSERVERS} ; do
    scp ceph.repo ${i}:/home/ceph/
    ssh ${i} "sudo mv /home/ceph/ceph.repo /etc/yum.repos.d/"
done
export LISTOFSERVERS=${MDSSERVERS}
ceph-deploy install ${LISTOFSERVERS}
ceph-deploy mds create ${LISTOFSERVERS}
```
MDS information will only show up after the CephFS has been created.
```bash
# data pool, with compression
export POOL_NAME="cephfs_data-ftp"
ceph osd pool create ${POOL_NAME} 128 128 replicated
ceph osd pool set ${POOL_NAME} crush_rule ciberterminalRule
ceph osd pool set ${POOL_NAME} compression_algorithm snappy
ceph osd pool set ${POOL_NAME} compression_mode aggressive
ceph osd pool set ${POOL_NAME} compression_min_blob_size 10240
ceph osd pool set ${POOL_NAME} compression_max_blob_size 4194304
ceph osd pool set ${POOL_NAME} pg_autoscale_mode on
# metadata pool
export POOL_NAME="cephfs_metadata-ftp"
ceph osd pool create ${POOL_NAME} 128 128 replicated
ceph osd pool set ${POOL_NAME} crush_rule ciberterminalRule
ceph osd pool set ${POOL_NAME} pg_autoscale_mode on
# create the filesystem and check
ceph fs new cephfs cephfs_metadata-ftp cephfs_data-ftp
ceph fs ls
ceph -s
ceph mds stat
```
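To verify the new filesystem end to end, you can mount it from a client with the kernel driver. A sketch, assuming a monitor at avmlp-osm-001.ciberterminal.net (hypothetical name) and the admin secret already copied to /etc/ceph/admin.secret on the client:

```bash
# mount CephFS and make sure it is writable
sudo mkdir -p /mnt/cephfs
sudo mount -t ceph avmlp-osm-001.ciberterminal.net:6789:/ /mnt/cephfs \
    -o name=admin,secretfile=/etc/ceph/admin.secret
df -h /mnt/cephfs
sudo touch /mnt/cephfs/.write-test && echo "cephfs OK"
```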