====== Deploying a ceph cluster ======
^ Documentation ^^
^Name:| Deploying a ceph cluster |
^Description:| |
^Modification date :| 2019/07/23 |
^Owner:| dodger |
^Notify changes to:| Owner |
^Tags:| |


====== External documentation ======

  * [[https://]]
  * https://
  * https://


====== Previous requirements ======

Basic knowledge of:
  * [[cloud-init:]]
  * [[salt-stack:]]

====== Variables used in this documentation ======

^ Name ^ Description ^ Sample ^
| ''THESERVER'' | Hostname of the node being acted on | demoenv-cephadm-001 |
| ''LISTOFSERVERS'' | Space-separated list of node hostnames | |

====== Deploy VMs ======

[[cloud-init:]] is used to deploy the virtual machines.

We use:
  * 3 OSDs: disk servers
  * 1 VM as admin
  * 1 VM as monitoring
  * 1 VM as gateway

For disk servers:
<code bash>
bash CloneWars.sh -c nuclu -h ${THESERVER} -i ${THESERVERIP} -d 40GB -m 20 -O -r 4096 -v 2 -o 2
</code>
\\
For adm & monitoring servers:
<code bash>
bash CloneWars.sh -c nuclu -h ${THESERVER} -i ${THESERVERIP} -m 20 -O -r 2048 -v 1 -o 2
</code>
\\
For the gateway:
<code bash>
bash CloneWars.sh -c nuclu -h ${THESERVER} -i ${THESERVERIP} -m 20 -O -r 4096 -v 2 -o 4
</code>

====== Run salt basic states ======

  - Connect to the salt-master
  - Run the basic states:
<code bash>
salt "…
salt "…
</code>

====== Additional steps with salt ======

===== Install yum-plugin-priorities =====

On all the servers:
<code bash>
salt "…
</code>

===== Install ceph-deploy =====

On the adm node:
<code bash>
salt "…
</code>

===== Add ceph user =====

On all the servers:
<code bash>
salt "…
</code>
Check:
<code bash>
salt "…
</code>

===== Add ceph user to sudoers =====
On all the servers:
<code bash>
salt "…
"ceph ALL = (root) NOPASSWD:…
</code>

Check:
<code bash>
salt "…
salt "…
</code>

===== Generate ssh keys =====
On all the servers:
<code bash>
salt "…
"…
runas=ceph …
</code>

===== Populate ssh keys =====

Get the public keys, all servers:
<code bash>
salt "…
</code>

Populate the public keys on every server:
<code bash>
salt "…
while read LINE ; do salt "…
</code>
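The population loop above is cut off in this revision. A minimal local simulation of the idea (plain files stand in for the per-minion public keys, and a shared ''authorized_keys'' file is the assumed target; the real loop feeds salt output instead):

```shell
# Simulation only: gather every node's pub key and append each line to a
# shared authorized_keys file. Key material and node names are made up.
WORKDIR=$(mktemp -d)
echo 'ssh-rsa AAAAB3...key1 ceph@node1' > "${WORKDIR}/node1.pub"
echo 'ssh-rsa AAAAB3...key2 ceph@node2' > "${WORKDIR}/node2.pub"
AUTH="${WORKDIR}/authorized_keys"
cat "${WORKDIR}"/*.pub | while read -r LINE ; do
    echo "${LINE}" >> "${AUTH}"
done
grep -c '^ssh-rsa' "${AUTH}"
```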

====== Deploy CEPH ======
<WRAP center round tip 60%>
ceph actions are run on the adm node, not on the salt-master.
</WRAP>

===== Software install =====

On the admin node:
<code bash>
su - ceph
mkdir ~/…
cd ~/…
</code>

Export ''LISTOFSERVERS'':
<code bash>
export LISTOFSERVERS="…"
</code>
<WRAP center round important 60%>
The gateway node is not included in this list!
</WRAP>


<WRAP center round important 60%>
hammer is the latest version available for RHEL6!!
</WRAP>
And run the install:
<code bash>
ceph-deploy install ${LISTOFSERVERS} --repo-url https://…
</code>

Nautilus is the latest for RHEL7:
<code bash>
ceph-deploy install ${LISTOFSERVERS} --repo-url https://…
</code>
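The two install commands differ only in the release segment of the (truncated) ''--repo-url''. As an illustration of the layout download.ceph.com uses, ''rpm-<release>/<distro>/'' (the exact URLs in the original are not recoverable from this revision):

```shell
# Illustration, not from the original doc: build a ceph RPM repo URL from
# the release codename and distro tag, following download.ceph.com's layout.
repo_url() {
    printf 'https://download.ceph.com/rpm-%s/%s/\n' "$1" "$2"
}
repo_url hammer el6
repo_url nautilus el7
```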

\\
\\
Wait for ''ceph-deploy'' to finish.
\\
\\

===== Deploy monitoring node =====
Export ''THESERVER'':
<code bash>
##########################
# monitoring server
##########################
THESERVER="…"
</code>
\\
Deploy it:
<code bash>
ceph-deploy new ${THESERVER}
ceph-deploy mon create-initial
ceph-deploy gatherkeys ${THESERVER}
</code>


Enable [[http://|msgr2]] (messenger v2 protocol):
<code bash>
sudo ceph mon enable-msgr2
</code>

===== Deploy manager node =====
<WRAP center round important 60%>
Only for luminous+ versions (version >= 12).
</WRAP>

Export ''LISTOFSERVERS'':
<code bash>
##########################
# monitoring server
##########################
LISTOFSERVERS="…"
</code>
\\
Deploy it:
<code bash>
for THESERVER in ${LISTOFSERVERS} ; do
    ceph-deploy mgr create ${THESERVER}
done
</code>

==== Dashboard plugin (for manager) ====

[[ceph:]]
==== Enable PG autoscale (plugin for manager) ====

[[ceph:]]

===== Deploy disk nodes =====

Export ''LISTOFSERVERS'':
<code bash>
export LISTOFSERVERS="…"
</code>

\\
Check disks:
<code bash>
##########################
# STORAGE SERVERS
##########################
# check
for THESERVER in ${LISTOFSERVERS} ; do
    echo "${THESERVER}"
    ceph-deploy disk list "${THESERVER}"
done
</code>



==== New versions ====
For new versions (version >= luminous):
Create the ceph filesystems:
<code bash>
# deploy storage nodes
for THESERVER in ${LISTOFSERVERS} ; do
    echo "${THESERVER}"
    echo "###################"
    …
    echo "###################"
    …
done
</code>


==== Old versions ====
For old versions (version < luminous):
Create the ceph filesystems:
<code bash>
# deploy storage nodes
for THESERVER in ${LISTOFSERVERS} ; do
    echo "${THESERVER}"
    ceph-deploy disk zap ${THESERVER}:/…
    ceph-deploy osd prepare ${THESERVER}:/…
    ceph-deploy osd activate ${THESERVER}:/…
    ceph-deploy disk list ${THESERVER}
    echo "press enter to continue"
    read
done
</code>


===== [OPTIONAL] Deploy admin on all nodes (except gateway) =====
Export ''LISTOFSERVERS'':
<code bash>
export LISTOFSERVERS="…"
</code>

And deploy it:
<code bash>
##########################
# Deploy admin on all the nodes

ceph-deploy admin ${LISTOFSERVERS}
</code>
\\
\\
Check the keyring (this is a salt command, in case you didn't notice):
<code bash>
salt "…
</code>

====== Deploy gateway ======
Doc:
  * http://…

Deploy radosgw from the ceph adm node:
<code bash>
THESERVER="…"

ceph-deploy install --rgw ${THESERVER} --repo-url https://…
ceph-deploy admin ${THESERVER}
ssh ${THESERVER} "sudo yum -y install ceph-radosgw"
ceph-deploy rgw create ${THESERVER}
</code>


Make 80 the default port:
<code bash>
cat >> ceph.conf <<EOF
[client.rgw.${THESERVER}]
rgw_frontends = "…"
rgw_thread_pool_size = 100
EOF
ceph-deploy --overwrite-conf config push ${THESERVER}
</code>
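A note on the heredoc above: the ''EOF'' delimiter is left unquoted on purpose, so ''${THESERVER}'' expands when the snippet is written and the pushed config gains a per-host section name. A local sketch of that expansion (the hostname is an example and a temp file stands in for the real config):

```shell
# Sketch of the expansion behaviour: with an unquoted EOF delimiter the
# variable is substituted at write time. Hostname below is an example.
THESERVER="demoenv-cephgw-001"
CONF=$(mktemp)
cat >> "${CONF}" <<EOF
[client.rgw.${THESERVER}]
rgw_thread_pool_size = 100
EOF
grep '^\[client.rgw.' "${CONF}"
```

The ''grep'' should print ''[client.rgw.demoenv-cephgw-001]'', i.e. the section header with the hostname already substituted.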

Restart radosgw:
<code bash>
ssh ${THESERVER} "sudo systemctl restart ceph-radosgw"
ssh ${THESERVER} "sudo systemctl status ceph-radosgw"
</code>

Check (on the gw node):
<code bash>
ssh ${THESERVER} "sudo radosgw-admin zone get"
</code>

Sample:
<code bash>
ceph@demoenv-cephadm-001 ~/ $ sudo radosgw-admin zone get
{
    …
}
</code>


====== Deploying MDS ======

<code bash>
for i in 1 3 ; do bash CloneWars.sh -F -c datacenter01 -i 10.20.55.1${i} -v 4 -o 2 -r 4096 -O -m 20 -h demoenv-cephFS-00${i} ; done
for i in 2 4 ; do bash CloneWars.sh -F -c datacenter02 -i 10.20.55.1${i} -v 4 -o 2 -r 4096 -O -m 20 -h demoenv-cephFS-00${i} ; done
</code>

<code bash>
export THESERVER="…"
salt "…
salt "…
salt "…
salt "…
salt "…
salt "…
salt "…
salt "…
salt "…
export THESERVER="…"
salt "…
while read LINE ; do salt "…
</code>

From osm-001:
<code bash>
export MDSSERVERS="…"
for i in ${MDSSERVERS} ; do scp ceph.repo ${i}:/… ; done
export LISTOFSERVERS=${MDSSERVERS}
ceph-deploy install ${LISTOFSERVERS} …
ceph-deploy mds create ${LISTOFSERVERS}
</code>

<WRAP center round info 60%>
MDS information will appear after the creation of the cephfs.
</WRAP>

<code bash>
export POOL_NAME="cephfs_data-ftp"
ceph osd pool create ${POOL_NAME} 128 128 replicated
ceph osd pool set ${POOL_NAME} crush_rule ciberterminalRule
ceph osd pool set ${POOL_NAME} compression_algorithm snappy
ceph osd pool set ${POOL_NAME} compression_mode aggressive
ceph osd pool set ${POOL_NAME} compression_min_blob_size 10240
ceph osd pool set ${POOL_NAME} compression_max_blob_size 4194304
ceph osd pool set ${POOL_NAME} pg_autoscale_mode on
export POOL_NAME="cephfs_metadata-ftp"

ceph osd pool create ${POOL_NAME} 128 128 replicated
ceph osd pool set ${POOL_NAME} crush_rule ciberterminalRule
ceph osd pool set ${POOL_NAME} pg_autoscale_mode on

ceph fs new cephfs cephfs_metadata-ftp cephfs_data-ftp
ceph fs ls
ceph -s
ceph mds stat
</code>
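The ''128 128'' arguments above are ''pg_num''/''pgp_num''; PG counts should be powers of two sized to the pool. An illustrative helper (not part of the original procedure) for rounding a target PG count up to the next power of two:

```shell
# Illustrative only: round a target placement-group count up to the next
# power of two, as recommended when choosing pg_num for a pool.
next_pow2() {
    local n=$1 p=1
    while [ "${p}" -lt "${n}" ]; do p=$(( p * 2 )); done
    echo "${p}"
}
next_pow2 100   # -> 128
next_pow2 128   # -> 128
```

With ''pg_autoscale_mode on'' (as set above), later adjustments are handled by the autoscaler anyway.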



====== Troubleshooting ======
===== Error deploying radosgw =====
[[ceph:]]
===== Completely remove OSD from cluster =====
[[ceph:]]