====== [TROUBLESHOOT] Ceph too few pgs per osd ======

^  Documentation  ^|
^Name:| [TROUBLESHOOT] Ceph too few pgs per osd |
^Description:| How to solve this "issue" |
^Modification date:|13/06/2019|
^Owner:|dodger@ciberterminal.net|
^Notify changes to:|Owner |
^Tags:|ceph, object storage |


====== WARNING ======

<WRAP center round important 60%>
This document covers the **TOO FEW** PGs per OSD warning, not //too many// (documented [[ceph:troubleshooting:too_many_pgs_per_osd|here]]).
</WRAP>

====== The error ======

<code bash>
ceph@avmlp-osm-001 ~/ceph-deploy $ sudo ceph health
HEALTH_WARN too few PGs per OSD (4 < min 30)
</code>

Official documentation: [[http://docs.ceph.com/docs/master/rados/operations/health-checks/#many-objects-per-pg]]

====== The solution ======
The warning is raised when the average number of PGs per OSD falls below the configured threshold (''mon_pg_warn_min_per_osd'', 30 by default); here every pool was created with only 8 PGs:

<code bash>
ceph@avmlp-osm-001 ~/ceph-deploy $ sudo ceph osd pool ls detail
pool 1 '.rgw.root' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 autoscale_mode warn last_change 171 flags hashpspool stripe_width 0 application rgw
pool 2 'default.rgw.control' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 autoscale_mode warn last_change 173 flags hashpspool stripe_width 0 application rgw
pool 3 'default.rgw.meta' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 autoscale_mode warn last_change 175 flags hashpspool stripe_width 0 application rgw
pool 4 'default.rgw.log' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 autoscale_mode warn last_change 176 flags hashpspool stripe_width 0 application rgw
</code>
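The "4" in the health warning can be reproduced from this listing. A minimal sketch of the arithmetic (the 20-OSD count is taken from the ERANGE message quoted further down this page; everything else comes from the output above):

<code bash>
# 4 pools x 8 PGs each x replica size 3, spread over 20 OSDs.
# Integer division gives the average number of PG replicas per OSD.
pools=4 pg_per_pool=8 size=3 osds=20
echo $(( pools * pg_per_pool * size / osds ))   # prints 4, matching "4 < min 30"
</code>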

Just change the ''pg_num'' and ''pgp_num'' values:
<code bash>
ceph osd pool set POOL_NAME pg_num 500
ceph osd pool set POOL_NAME pgp_num 500
</code>

All pools at once (''pgp_num'' cannot be raised above ''pg_num'', so set ''pg_num'' first):
<code bash>
for i in $(ceph osd pool ls) ; do ceph osd pool set ${i} pg_num 60 ; ceph osd pool set ${i} pgp_num 60 ; done
</code>
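Rather than hard-coding values like 500 or 60, the traditional rule of thumb (the old Ceph pgcalc guidance) targets roughly 100 PGs per OSD, divided by the replica size and the number of pools, then rounded down to a power of two. A sketch, assuming the 20 OSDs, ''size 3'' and 4 pools shown on this page:

<code bash>
osds=20 target_per_osd=100 size=3 pools=4
raw=$(( osds * target_per_osd / (size * pools) ))   # 166
# Round down to the nearest power of two.
pg=1
while [ $(( pg * 2 )) -le "$raw" ]; do pg=$(( pg * 2 )); done
echo "$pg"   # prints 128
</code>

Note the ''autoscale_mode warn'' flag in the pool listing above: recent Ceph releases ship a pg_autoscaler that can handle this tuning automatically.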

**WARNING**: a huge number of PGs in all the pools can lead to an error:\\
<code>
Error ERANGE: pool id 4 pg_num 500 size 3 would mean 6000 total pgs, which exceeds max 5000 (mon_max_pg_per_osd 250 * num_in_osds 20)
</code>
See [[ceph:troubleshooting:erange_total_pg_exceeded|[TROUBLESHOOT] Error ERANGE: pool id X pg_num Y size Z would mean W total pgs, which exceeds]]

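The cap in that message is plain arithmetic, so a candidate ''pg_num'' can be checked before applying it. A sketch using the figures from the error above:

<code bash>
mon_max_pg_per_osd=250 num_in_osds=20   # cluster limit inputs
pools=4 pg_num=500 size=3               # proposed layout
echo $(( pools * pg_num * size ))             # prints 6000 (total PG replicas)
echo $(( mon_max_pg_per_osd * num_in_osds ))  # prints 5000 (the hard cap)
</code>

Since 6000 exceeds 5000, the monitor rejects the change with ERANGE.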
  
ceph/troubleshooting/too_few_pgs_per_osd.txt · Last modified: 2019/07/18 07:17 (external edit)