====== [HOWTO] Completely remove OSD from cluster ======

^ Documentation ^|
^Name:| [HOWTO] Completely remove OSD from cluster |
^Description:| HOW TO Completely remove OSD from cluster (hardcore edition) |
^Modification date :|07/06/2018|
^Owner:|dodger|
^Notify changes to:|Owner |
^Tags:|ceph, object storage |
^Scalate to:|The_fucking_bofh|

====== Official documentation ======

  * [[https://access.redhat.com/solutions/1979363|RH support link]] (you'll need rhnetwork user/pass)

====== Instructions ======

Instructions are copy/paste.

===== What are the steps to remove an OSD from an RHCS cluster? =====

There can be various reasons to remove an OSD from an RHCS cluster; the following steps show how to do it:\\

==== 1. Stop the OSD process ====

  sudo /etc/init.d/ceph stop osd.{X}    # where {X} is the OSD number

or:

  sudo systemctl stop ceph-osd@{X}      # where {X} is the OSD number

\\
==== 2. Mark the OSD out ====

  ceph osd out {X}    # where {X} is the OSD number

\\
==== 3. Confirm that the OSD is indeed marked out ====

  ceph -w

Important: wait for the data rebalance to complete and for all the PGs to come back ''active+clean'' before moving on to the next steps.

\\
==== 4. [OPTIONAL] Remove the OSD from the CRUSH map ====

Step 4 is only needed if the OSD is being permanently removed from the cluster and will not be redeployed.

  ceph osd crush remove osd.{X}

If the OSD is being removed because it will be redeployed, skip step 4: the OSD stays in the CRUSH map marked as DNE, and no data rebalance is triggered by a CRUSH map update. Only the addition of an OSD will cause data movement.

\\
==== 5. Remove the authentication key for the OSD ====

  ceph auth del osd.{X}

\\
==== 6. Confirm that the keys for osd.{X} are no longer listed ====

  ceph auth list

\\
==== 7. Remove the OSD from the OSD map ====

  ceph osd rm {X}

\\
==== 8. If this procedure is used to replace the OSD disk, unmount the OSD disk you want to replace ====

  umount /var/lib/ceph/osd/ceph-{X}
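
==== Putting it together (sketch) ====

The steps above can be chained into a small script. This is a minimal sketch, not part of the official procedure: it assumes a systemd-based node with the ''ceph'' CLI and an admin keyring, takes the OSD id as ''$1'', and (as a simplification) polls ''ceph health'' for ''HEALTH_OK'' instead of checking that every PG is ''active+clean''. Adapt it before using it on a real cluster.

<code bash>
#!/usr/bin/env bash
# remove_osd.sh -- sketch of the OSD removal procedure described above.
# Assumptions (adapt to your cluster):
#   - systemd-managed OSDs (ceph-osd@{X})
#   - admin keyring available to the ceph CLI on this node
#   - "ceph health" reporting HEALTH_OK is used as a stand-in for
#     "all PGs active+clean"
set -euo pipefail

OSD_ID="${1:?usage: $0 <osd-id>}"

# 1. Stop the OSD process
sudo systemctl stop "ceph-osd@${OSD_ID}"

# 2. Mark the OSD out
ceph osd out "${OSD_ID}"

# 3. Wait for the rebalance to finish before continuing
until ceph health | grep -q HEALTH_OK; do
    echo "waiting for rebalance to complete..."
    sleep 30
done

# 4. [OPTIONAL] Remove the OSD from the CRUSH map
#    (only when the OSD is permanently removed and not redeployed)
ceph osd crush remove "osd.${OSD_ID}"

# 5. Remove the authentication key for the OSD
ceph auth del "osd.${OSD_ID}"

# 7. Remove the OSD from the OSD map
ceph osd rm "${OSD_ID}"

# 8. If replacing the disk, unmount its data directory
sudo umount "/var/lib/ceph/osd/ceph-${OSD_ID}" || true
</code>

After the script finishes, ''ceph auth list'' and ''ceph osd tree'' can be used to confirm that osd.{X} is gone (steps 6 and 3 of the manual procedure).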