====== Deploying a ceph cluster ======

^  Documentation  ^|
^Name:| Deploying a ceph cluster |
^Description:| Steps to deploy a ceph cluster |
^Modification date:| 21/11/2018 |
^Owner:| dodger@ciberterminal.net |
^Notify changes to:| Owner |
^Tags:| ceph, object storage |


====== External documentation ======

  * [[https://www.howtoforge.com/tutorial/how-to-build-a-ceph-cluster-on-centos-7/|How to build a Ceph Distributed Storage Cluster on CentOS 7]]
  * https://bugs.launchpad.net/charms/+source/ceph-radosgw/+bug/1577519
  * https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/1.2.3/html/installation_guide_for_rhel_x86_64/


====== Prerequisites ======

Basic knowledge of:
  * [[cloud-init:deploy_new_vm|[HOWTO] Deploy any VM with Cloud-init]]
  * [[salt-stack:running_commands|Running commands with SALT]]


====== Variables used in this documentation ======

^ Name ^ Description ^ Sample ^
| ''${THESERVER}'' | Salt target; it can be a glob matching several servers (see sample) | <code bash>export THESERVER="demoenv-ceph*-00*"</code> |
| ''${LISTOFSERVERS}'' | ''ceph-deploy'' target (space-separated list of hosts) | <code bash>export LISTOFSERVERS="demoenv-cephm-001 demoenv-cephd-001"</code> |


====== Deploy VMs ======

[[cloud-init:project_clonewars|[SCRIPT] Project CloneWars.sh]]

We use:
  * 3 OSDs (disk servers)
  * 1 VM as admin node
  * 1 VM as monitor node
  * 1 VM as gateway

For the disk servers:
<code bash>
bash CloneWars.sh -c nuclu -h ${THESERVER} -i ${THESERVERIP} -d 40GB -m 20 -O -r 4096 -v 2 -o 2
</code>
\\
For the admin & monitor servers:
<code bash>
bash CloneWars.sh -c nuclu -h ${THESERVER} -i ${THESERVERIP} -m 20 -O -r 2048 -v 1 -o 2
</code>
\\
For the gateway:
<code bash>
bash CloneWars.sh -c nuclu -h ${THESERVER} -i ${THESERVERIP} -m 20 -O -r 4096 -v 2 -o 4
</code>


====== Run salt basic states ======

  - Connect to the salt-master
  - Apply the base highstate and the ''nsupdate'' ''sls'':
<code bash>
salt "${THESERVER}" state.apply
salt "${THESERVER}" state.apply nsupdate
</code>

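Before applying the states you can verify that the targeted minions actually respond; a minimal check, reusing the same ''${THESERVER}'' glob:
<code bash>
# Confirm every targeted minion answers
salt "${THESERVER}" test.ping

# Optional: dry-run the highstate to preview the changes it would make
salt "${THESERVER}" state.apply test=True
</code>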

====== Additional steps with salt ======

===== Install yum-plugin-priorities =====

On all servers:
<code bash>
salt "${THESERVER}" pkg.install yum-plugin-priorities
</code>

===== Install ceph-deploy =====

On the admin node:
<code bash>
salt "${THESERVER}" pkg.install ceph-deploy
</code>
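A quick way to confirm the tool is installed and on the ''PATH'' (the version shown depends on the configured repository):
<code bash>
salt "${THESERVER}" cmd.run "ceph-deploy --version"
</code>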
- 
-===== Add ceph user  ===== 
- 
-In all the servers: 
-<code bash> 
-salt "${THESERVER}" user.add ceph 1002 
-</code> 
-Check: 
-<code bash> 
-salt "${THESERVER}" user.info ceph 
-</code> 
- 
-===== Add ceph user to sudoers  ===== 
-In all the servers: 
-<code bash> 
-salt "${THESERVER}" file.write /etc/sudoers.d/ceph \ 
-"ceph ALL = (root) NOPASSWD:ALL" 
-</code> 
- 
-Check: 
-<code bash> 
-salt "${THESERVER}" cmd.run 'cat /etc/sudoers.d/ceph' 
-salt "${THESERVER}" cmd.run "sudo whoami" runas=ceph 
-</code> 

===== Generate ssh keys =====
On all servers:
<code bash>
salt "${THESERVER}" cmd.run \
    "ssh-keygen -q -N '' -f /home/ceph/.ssh/id_rsa" \
    runas=ceph
</code>

===== Populate ssh keys =====

Collect the public keys from all servers (the ''egrep'' drops the minion-name lines, which end with a colon; the ''sed'' strips the indentation salt adds):
<code bash>
salt "${THESERVER}" cmd.run "cat /home/ceph/.ssh/id_rsa.pub" | egrep -v ":$" | sed 's/^[[:space:]]\{1,5\}//g' > auth_keys_oss.txt
</code>

Populate ''authorized_keys'' on all servers:
<code bash>
salt "${THESERVER}" file.copy /home/ceph/.ssh/id_rsa.pub /home/ceph/.ssh/authorized_keys
while read LINE ; do salt "${THESERVER}" file.append /home/ceph/.ssh/authorized_keys "${LINE}" ; done < auth_keys_oss.txt
</code>
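''ceph-deploy'' relies on the ceph user being able to ssh between nodes without a password, so it is worth verifying the key distribution at this point. A minimal check from the admin node, assuming the ''${LISTOFSERVERS}'' list used below:
<code bash>
# Run as the ceph user on the admin node;
# every host should print its hostname without prompting for a password.
for NODE in ${LISTOFSERVERS} ; do
    ssh -o BatchMode=yes ceph@${NODE} "hostname"
done
</code>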

====== Deploy CEPH ======
<WRAP center round tip 60%>
''ceph-deploy'' actions are run on the admin node, not on the salt-master
</WRAP>

===== Software install =====

On the admin node:
<code bash>
su - ceph
mkdir ~/ceph-deploy
cd ~/ceph-deploy
</code>
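Optionally, an SSH client config for the ceph user saves specifying the remote user on every connection; a minimal sketch, assuming the ''demoenv-*'' host naming used in this page:
<code bash>
# Run as the ceph user on the admin node
cat >> ~/.ssh/config <<EOF
Host demoenv-*
    User ceph
EOF
chmod 600 ~/.ssh/config
</code>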

Export the ''LISTOFSERVERS'' variable, for example:
<code bash>
export LISTOFSERVERS="demoenv-cephadm-101 demoenv-cephm-101 demoenv-cephd-101 demoenv-cephd-102 demoenv-cephd-103"
</code>
<WRAP center round important 60%>
The gateway node is not included in this list!
</WRAP>


<WRAP center round important 60%>
Hammer is the latest version available for RHEL 6!!
</WRAP>
And run the install:
<code bash>
ceph-deploy install ${LISTOFSERVERS} --repo-url https://download.ceph.com/rpm-hammer/el7/
</code>

Nautilus is the latest for RHEL 7:
<code bash>
ceph-deploy install ${LISTOFSERVERS} --repo-url https://download.ceph.com/rpm-nautilus/el7/
</code>

\\
\\
Wait for ''ceph-deploy'' to finish its jobs (it will take some time).
\\
\\

===== Deploy monitor node =====
Export the ''THESERVER'' var:
<code bash>
##########################
# monitor server
##########################
THESERVER="demoenv-cephm-001"
</code>
\\
Deploy it:
<code bash>
ceph-deploy new ${THESERVER}
ceph-deploy mon create-initial
ceph-deploy gatherkeys ${THESERVER}
</code>


Enable the [[http://docs.ceph.com/docs/master/rados/configuration/msgr2/|messenger v2]] protocol (Nautilus and newer only):
<code bash>
sudo ceph mon enable-msgr2
</code>
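A quick sanity check once the monitor is up (run on the monitor node, or on any node that already has the admin keyring):
<code bash>
# The new monitor should be listed and in quorum
sudo ceph -s
sudo ceph mon stat
</code>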


===== Deploy manager node =====
<WRAP center round important 60%>
Only for Luminous and newer (version >= 12.x)
</WRAP>

Export the ''LISTOFSERVERS'' var:
<code bash>
##########################
# manager servers
##########################
LISTOFSERVERS="demoenv-cephm-001 demoenv-cephm-002"
</code>
\\
Deploy them:
<code bash>
for THESERVER in ${LISTOFSERVERS} ; do
    ceph-deploy mgr create ${THESERVER}
done
</code>
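To confirm the managers registered (one active, the others standby):
<code bash>
sudo ceph mgr stat
sudo ceph -s
</code>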

==== Dashboard plugin (for manager) ====

[[ceph:howtos:ceph_dashboard|[HOWTO] Setup Ceph Dashboard]]

==== Enable PG autoscale (plugin for manager) ====

[[ceph:howtos:autoscaling_pgs|[HOWTO] Enable PG autoscale]]

===== Deploy disk nodes =====

Export the ''LISTOFSERVERS'' variable, for example:
<code bash>
export LISTOFSERVERS="demoenv-cephd-001 demoenv-cephd-002 demoenv-cephd-003"
</code>

\\
Check the disks:
<code bash>
##########################
# STORAGE SERVERS
##########################
# check
for THESERVER in ${LISTOFSERVERS} ; do
    echo "${THESERVER}"
    ceph-deploy disk list "${THESERVER}"
done
</code>


==== New versions ====
For new versions (version >= 12.x), create the OSDs:
<code bash>
# deploy storage nodes
for THESERVER in ${LISTOFSERVERS} ; do
    echo "${THESERVER}"
    echo "################### ${THESERVER}: creating disk"
    ceph-deploy osd create ${THESERVER} --data /dev/sdb
    echo "################### ${THESERVER}: listing (check) disk"
    ceph-deploy osd list ${THESERVER}
done
</code>


==== Old versions ====
For old versions (version < 12.x), create the OSDs:
<code bash>
# deploy storage nodes
for THESERVER in ${LISTOFSERVERS} ; do
    echo "${THESERVER}"
    ceph-deploy disk zap ${THESERVER}:/dev/sdb
    ceph-deploy osd prepare ${THESERVER}:/dev/sdb
    ceph-deploy osd activate ${THESERVER}:/dev/sdb1
    ceph-deploy disk list ${THESERVER}
    echo "press enter to continue"
    read
done
</code>
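Whichever method applies, the new OSDs should now show up as ''up'' and ''in'':
<code bash>
sudo ceph osd tree
sudo ceph -s
</code>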


===== [OPTIONAL] Deploy admin on all nodes (except gateway) =====
Export the ''LISTOFSERVERS'' variable, for example:
<code bash>
export LISTOFSERVERS="demoenv-cephadm-101 demoenv-cephm-101 demoenv-cephd-101 demoenv-cephd-102 demoenv-cephd-103"
</code>

And deploy it:
<code bash>
##########################
# Deploy admin on all the nodes
##########################
ceph-deploy admin ${LISTOFSERVERS}
</code>
\\
\\
Check the keyring permissions (note that this is a salt command, run from the salt-master):
<code bash>
salt "${THESERVER}" file.check_perms /etc/ceph/ceph.client.admin.keyring '{}' root root 644
</code>

====== Deploy gateway ======
Doc:
  * http://docs.ceph.com/docs/mimic/install/install-ceph-gateway/

Deploy radosgw from the ceph admin node:
<code bash>
THESERVER="demoenv-cephgw-001"

ceph-deploy install --rgw ${THESERVER} --repo-url https://download.ceph.com/rpm-hammer/el7/
ceph-deploy admin ${THESERVER}
ssh ${THESERVER} "sudo yum -y install ceph-radosgw"
ceph-deploy rgw create ${THESERVER}
</code>


Set 80 as the default port:
<code bash>
cat >>ceph.conf<<EOF
[client.rgw.${THESERVER}]
rgw_frontends = "civetweb port=80"
rgw_thread_pool_size = 100
EOF
ceph-deploy --overwrite-conf config push ${THESERVER}
</code>

Restart radosgw:
<code bash>
ssh ${THESERVER} "sudo systemctl restart ceph-radosgw"
ssh ${THESERVER} "sudo systemctl status ceph-radosgw"
</code>

Check (on the gateway node):
<code bash>
ssh ${THESERVER} "sudo radosgw-admin zone get"
</code>

Sample:
<code bash>
ceph@demoenv-cephadm-001 ~/ceph-deploy $ ssh ${THESERVER} "sudo radosgw-admin zone get"
{
    "domain_root": ".rgw",
    "control_pool": ".rgw.control",
    "gc_pool": ".rgw.gc",
    "log_pool": ".log",
    "intent_log_pool": ".intent-log",
    "usage_log_pool": ".usage",
    "user_keys_pool": ".users",
    "user_email_pool": ".users.email",
    "user_swift_pool": ".users.swift",
    "user_uid_pool": ".users.uid",
    "system_key": {
        "access_key": "",
        "secret_key": ""
    },
    "placement_pools": [
        {
            "key": "default-placement",
            "val": {
                "index_pool": ".rgw.buckets.index",
                "data_pool": ".rgw.buckets",
                "data_extra_pool": ".rgw.buckets.extra"
            }
        }
    ]
}
</code>
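The gateway should now also answer S3 requests on port 80; an anonymous request against the root URL returns an (empty) bucket listing:
<code bash>
# From any host that can reach the gateway on port 80
curl http://demoenv-cephgw-001/
# Expected: a small ListAllMyBucketsResult XML document
</code>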


====== Deploying MDS ======

Deploy the MDS VMs (two per datacenter in this example):
<code bash>
for i in 1 3  ; do bash CloneWars.sh -F -c datacenter01 -i 10.20.55.1${i} -v 4 -o 2  -r 4096 -O -m 20 -h demoenv-cephFS-00${i}  ; done
for i in 2 4  ; do bash CloneWars.sh -F -c datacenter02 -i 10.20.55.1${i} -v 4 -o 2  -r 4096 -O -m 20 -h demoenv-cephFS-00${i}  ; done
</code>

Then run the same basic preparation as for the other nodes:
<code bash>
export THESERVER="demoenv-cephfs*.ciberterminal.net"
salt "${THESERVER}" state.apply
salt "${THESERVER}" state.apply nsupdate
salt "${THESERVER}" pkg.install yum-plugin-priorities
salt "${THESERVER}" user.add ceph 1002
salt "${THESERVER}" file.write /etc/sudoers.d/ceph "ceph ALL = (root) NOPASSWD:ALL"
salt "${THESERVER}" cmd.run "cat /etc/sudoers.d/ceph"
salt "${THESERVER}" cmd.run "sudo whoami" runas=ceph
salt "${THESERVER}" cmd.run "ssh-keygen -q -N '' -f /home/ceph/.ssh/id_rsa" runas=ceph
salt "${THESERVER}" cmd.run "cat /home/ceph/.ssh/id_rsa.pub" | egrep -v ":$" | sed 's/^[[:space:]]\{1,5\}//g' > auth_keys_demoenv-ceph.txt
export THESERVER="demoenv-ceph*.ciberterminal.net"
salt "${THESERVER}" file.copy /home/ceph/.ssh/id_rsa.pub /home/ceph/.ssh/authorized_keys
while read LINE ; do salt "${THESERVER}" file.append /home/ceph/.ssh/authorized_keys "${LINE}" ; done < auth_keys_demoenv-ceph.txt
</code>

From the admin node (osm-001):
<code bash>
export MDSSERVERS="demoenv-cephfs-002.ciberterminal.net demoenv-cephfs-001.ciberterminal.net demoenv-cephfs-004.ciberterminal.net demoenv-cephfs-003.ciberterminal.net"
export LISTOFSERVERS=${MDSSERVERS}
# copy the ceph yum repo to the MDS nodes
for i in ${MDSSERVERS} ; do scp ceph.repo ${i}:/home/ceph/ ; ssh ${i} "sudo mv /home/ceph/ceph.repo /etc/yum.repos.d/" ; done
ceph-deploy install ${LISTOFSERVERS}
ceph-deploy mds create ${LISTOFSERVERS}
</code>

<WRAP center round info 60%>
MDS information will only appear once the CephFS filesystem has been created.
</WRAP>

Create the data and metadata pools, then the filesystem:
<code bash>
export POOL_NAME="cephfs_data-ftp"
ceph osd pool create ${POOL_NAME} 128 128 replicated
ceph osd pool set ${POOL_NAME} crush_rule ciberterminalRule
ceph osd pool set ${POOL_NAME} compression_algorithm snappy
ceph osd pool set ${POOL_NAME} compression_mode aggressive
ceph osd pool set ${POOL_NAME} compression_min_blob_size 10240
ceph osd pool set ${POOL_NAME} compression_max_blob_size 4194304
ceph osd pool set ${POOL_NAME} pg_autoscale_mode on

export POOL_NAME="cephfs_metadata-ftp"
ceph osd pool create ${POOL_NAME} 128 128 replicated
ceph osd pool set ${POOL_NAME} crush_rule ciberterminalRule
ceph osd pool set ${POOL_NAME} pg_autoscale_mode on

ceph fs new cephfs cephfs_metadata-ftp cephfs_data-ftp
ceph fs ls
ceph -s
ceph mds stat
</code>
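To verify the filesystem from a client you can mount it with the kernel client. A minimal sketch, assuming the monitor deployed above and the ''client.admin'' key (a dedicated client key is preferable in production):
<code bash>
# On a client host with ceph installed and the admin keyring deployed
sudo ceph auth get-key client.admin | sudo tee /etc/ceph/admin.secret > /dev/null
sudo mkdir -p /mnt/cephfs
sudo mount -t ceph demoenv-cephm-001:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret
df -h /mnt/cephfs
</code>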



====== Troubleshooting ======

===== Error deploying radosgw =====
[[ceph:troubleshooting:error_deploying_gateway|[TROUBLESHOOT] Error deploying radosgw]]

===== Completely remove OSD from cluster =====
[[ceph:howtos:remove_osd|[HOWTO] Completely remove OSD from cluster]]