====== CEPH: Cheatsheet! ======
^ Documentation ^|
^Name:| CEPH: Cheatsheet! |
^Description:| CHEATSHEET |
^Modification date:|25/01/2019|
^Owner:|dodger|
^Notify changes to:|Owner|
^Tags:| ceph, object storage|
^Escalate to:|The_fucking_bofh|
====== Cheatsheet ======
===== Admin =====
==== Health ====
^ ''cmd'' ^ Description ^
|ceph health | Status of the cluster |
|ceph health detail | Detailed status of the cluster |
|ceph -s | Another way to see the status |
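For unattended checks, the one-word summary from ''ceph health'' (''HEALTH_OK'', ''HEALTH_WARN'' or ''HEALTH_ERR'') is easy to script against; a minimal sketch, assuming the calling user can read the admin keyring:
<code bash>
# Minimal health probe: print details only when the cluster is not HEALTH_OK.
if ! ceph health | grep -q '^HEALTH_OK'; then
    ceph health detail
    exit 1
fi
</code>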
==== Disk ====
^ ''cmd'' ^ Description ^
|ceph df | ''df'' of the cluster |
|ceph df detail | Detailed usage of the cluster |
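Both commands also accept ''-f json'' / ''-f json-pretty'' for machine-readable output (the exact field names under ''.stats'' vary between Ceph releases, so check on your version first):
<code bash>
# Raw and per-pool usage as JSON, for feeding into jq or a monitoring agent.
ceph df -f json-pretty | less
</code>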
===== Object Gateway =====
^ ''cmd'' ^ Description ^
|radosgw-admin metadata list user | List currently created users |
|radosgw-admin --uid=user1 user info | Show user info |
|radosgw-admin --uid=user1 user suspend | Suspend a user account |
|radosgw-admin bucket list | List ALL buckets |
|radosgw-admin bucket list --uid=userID | List a user's buckets |
|radosgw-admin bucket rm --bucket=mybucket --purge-objects | Drop a bucket INCLUDING ITS CONTENTS |
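A typical user lifecycle combines the commands above with ''radosgw-admin user create''; the uid and display name below are made-up examples:
<code bash>
# Hypothetical user "user1": create, inspect, suspend, re-enable.
radosgw-admin user create --uid=user1 --display-name="User One"
radosgw-admin user info --uid=user1
radosgw-admin user suspend --uid=user1
radosgw-admin user enable --uid=user1
</code>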
===== MON =====
^ ''cmd'' ^ Description ^
|ceph mon_status -f json-pretty | Local monitor status |
|ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.${THESERVER}.asok mon_status -f json-pretty | Detailed mon server status; set the ''${THESERVER}'' variable accordingly |
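''${THESERVER}'' is the mon's ID, which in most deployments is the short hostname of the mon host; a sketch under that assumption:
<code bash>
# Assumes the local mon ID matches the short hostname, the common layout.
THESERVER=$(hostname -s)
ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.${THESERVER}.asok mon_status -f json-pretty
</code>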
===== OSD =====
^ ''cmd'' ^ Description ^
|ceph osd df | OSD disk usage (aka ''df'') |
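''ceph osd df'' also takes ''-f json'', which makes it easy to rank OSDs by utilization; a sketch assuming ''jq'' is installed (the ''nodes[].utilization'' field name is from recent releases, verify on yours):
<code bash>
# List OSD id and % utilization, most-used first.
ceph osd df -f json | jq -r '.nodes[] | "\(.id)\t\(.utilization)"' | sort -k2 -rn
</code>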
===== CRUSH =====
^ ''cmd'' ^ Description ^
|ceph osd crush tree | View the CRUSH hierarchy |
|ceph osd crush rule ls | List the CRUSH rules |
|ceph osd crush rule dump | View the CRUSH rule definitions |
|ceph osd getcrushmap -o crushmap.bin | Write the //live// crushmap to ''crushmap.bin'' |
|crushtool -d crushmap.bin -o crushmap.txt | Decompile ''crushmap.bin'' |
|crushtool -c crushmap.txt -o crushmap.bin.new | Compile the crushmap |
|ceph osd setcrushmap -i crushmap.bin.new | Push the new CRUSH map to the cluster |
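Chained together, the four map commands above give the usual CRUSH edit round trip:
<code bash>
# Dump, decompile, edit, recompile and push back the CRUSH map.
ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt
${EDITOR:-vi} crushmap.txt          # make your changes here
crushtool -c crushmap.txt -o crushmap.bin.new
ceph osd setcrushmap -i crushmap.bin.new
</code>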
===== POOL =====
^ ''cmd'' ^ Description ^
|ceph osd pool ls | List pools |
|ceph osd pool ls detail | List pools with additional info |
|ceph osd crush rule dump | View the CRUSH rule definitions |
|ceph osd pool create ${new_pool} | Create ''${new_pool}'' |
|ceph osd pool create ${new_pool} 500 500 replicated ciberterminalRule | Create a pool specifying pg_num, pgp_num, type and CRUSH rule |
|rados cppool ${old_pool} ${new_pool} | Copy one pool to another |
|ceph osd pool rename ${new_pool} ${old_pool} | Rename a pool |
|ceph osd pool set ${POOL_NAME} pg_num 500 | Change the pg_num of a pool |
|ceph osd pool set ${POOL_NAME} crush_rule ciberterminalRule | Change the CRUSH rule assigned to a pool |
|ceph osd pool get ${POOL_NAME} all | Get all options set for ''${POOL_NAME}'' |
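''rados cppool'' plus ''rename'' is the classic way to rebuild a pool with new settings; note that ''cppool'' does not preserve snapshots and clients should be stopped while copying. A sketch using the placeholder names above:
<code bash>
# Rebuild a pool with new pg numbers, then swap names.
ceph osd pool create ${new_pool} 500 500 replicated ciberterminalRule
rados cppool ${old_pool} ${new_pool}
ceph osd pool rename ${old_pool} ${old_pool}.old
ceph osd pool rename ${new_pool} ${old_pool}
</code>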
===== MGR =====
^ ''cmd'' ^ Description ^
|ceph mgr services | Running services |
|ceph mgr module ls | List modules for ''mgr'' |
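Modules are switched on and off with ''ceph mgr module enable/disable''; using ''dashboard'' as an example module:
<code bash>
# Enable a module and confirm it registered a service endpoint.
ceph mgr module enable dashboard
ceph mgr module ls       # "dashboard" should now appear under enabled_modules
ceph mgr services        # and expose its URL here
</code>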
===== MDS (cephfs) =====
^ ''cmd'' ^ Description ^
|ceph fs ls | List filesystems |
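For a quick view of the MDS daemons serving those filesystems, ''ceph mds stat'' and ''ceph fs status'' complement ''ceph fs ls'':
<code bash>
# Overview of filesystems and their MDS daemons.
ceph fs ls
ceph mds stat
ceph fs status      # available on Luminous and later
</code>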