Deploying a ceph cluster (RAW MODE)

Documentation
Name: Deploying a ceph cluster (RAW MODE)
Description: raw commands of a live CEPH storage cluster deployment
Modification date: 24/10/2018
Owner: dodger@ciberterminal.net
Notify changes to: Owner
Tags: ceph, object storage

External documentation

RAW COMMANDS (DEV)

cloud-init:

export THENETWORK=N_10.90.48.0-20_V948_devDB
export THEHOSTNAME=bvmld-osadm-101
sed -e "s,THEHOSTNAME,${THEHOSTNAME,,},g" -e "s,THEIPADDRESS,10.90.49.101,g" -e "s,THENETMASK,20,g" -e "s,THEGATEWAY,10.90.48.1,g" cloud-init.TMPL.yaml > cloud-init.${THEHOSTNAME^^}.yaml
 
export THEHOSTNAME=bvmld-osd-101
sed -e "s,THEHOSTNAME,${THEHOSTNAME,,},g" -e "s,THEIPADDRESS,10.90.49.102,g" -e "s,THENETMASK,20,g" -e "s,THEGATEWAY,10.90.48.1,g" cloud-init.TMPL.yaml > cloud-init.${THEHOSTNAME^^}.yaml
 
export THEHOSTNAME=bvmld-osd-102
sed -e "s,THEHOSTNAME,${THEHOSTNAME,,},g" -e "s,THEIPADDRESS,10.90.49.103,g" -e "s,THENETMASK,20,g" -e "s,THEGATEWAY,10.90.48.1,g" cloud-init.TMPL.yaml > cloud-init.${THEHOSTNAME^^}.yaml
 
export THEHOSTNAME=bvmld-osd-103
sed -e "s,THEHOSTNAME,${THEHOSTNAME,,},g" -e "s,THEIPADDRESS,10.90.49.104,g" -e "s,THENETMASK,20,g" -e "s,THEGATEWAY,10.90.48.1,g" cloud-init.TMPL.yaml > cloud-init.${THEHOSTNAME^^}.yaml
 
export THEHOSTNAME=bvmld-osm-101
sed -e "s,THEHOSTNAME,${THEHOSTNAME,,},g" -e "s,THEIPADDRESS,10.90.49.105,g" -e "s,THENETMASK,20,g" -e "s,THEGATEWAY,10.90.48.1,g" cloud-init.TMPL.yaml > cloud-init.${THEHOSTNAME^^}.yaml
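The five invocations above can be driven from a single hostname/IP list. A self-contained sketch (the stand-in template below only exists so the sketch runs on its own; in practice the real cloud-init.TMPL.yaml is used):

```shell
# Stand-in template so the sketch is runnable; with the real
# cloud-init.TMPL.yaml in place, skip this line.
printf 'hostname: THEHOSTNAME\naddress: THEIPADDRESS/THENETMASK\ngateway: THEGATEWAY\n' > cloud-init.TMPL.yaml

# One rendered cloud-init file per "hostname ip" pair, with the same
# substitutions as the individual commands above.
while read THEHOSTNAME THEIPADDRESS ; do
    sed -e "s,THEHOSTNAME,${THEHOSTNAME,,},g" \
        -e "s,THEIPADDRESS,${THEIPADDRESS},g" \
        -e "s,THENETMASK,20,g" \
        -e "s,THEGATEWAY,10.90.48.1,g" \
        cloud-init.TMPL.yaml > "cloud-init.${THEHOSTNAME^^}.yaml"
done <<'HOSTS'
bvmld-osadm-101 10.90.49.101
bvmld-osd-101 10.90.49.102
bvmld-osd-102 10.90.49.103
bvmld-osd-103 10.90.49.104
bvmld-osm-101 10.90.49.105
HOSTS
```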
 
 
cat >> doall.sh <<'EOF'
THENETWORK=N_10.90.48.0-20_V948_devDB
 
HOSTLIST="BVMLD-OSADM-101 BVMLD-OSD-101 BVMLD-OSD-102 BVMLD-OSD-103 BVMLD-OSM-101" 
for THEHOSTNAME in ${HOSTLIST} ; do
	acli uhura.vm.clone_with_customize ${THEHOSTNAME^^} clone_from_vm=TMPL-CentOS7.1804_v002 cloudinit_userdata_path=file:///home/nutanix/bvmld-oss/cloud-init.${THEHOSTNAME^^}.yaml container="Container01"
	acli vm.nic_delete ${THEHOSTNAME^^} $(acli vm.nic_list ${THEHOSTNAME^^}| egrep -v "^Mac Address"|awk '{print $1}')
	acli vm.nic_create ${THEHOSTNAME^^} network="${THENETWORK}"
	acli vm.on ${THEHOSTNAME^^} 
done
EOF
 
bash -x doall.sh
 
 
for THEHOSTNAME  in BVMLD-OSD-10{1..3} ; do acli vm.disk_create ${THEHOSTNAME} bus="scsi" cdrom="false" container="Container01" create_size="30G" ; done

On the salt-master, after accepting the minions:

salt 'bvmld-os*-10[1-3].ciberterminal.net' pkg.install yum-plugin-priorities
salt 'bvmld-osadm-101*' pkg.install ceph-deploy
salt 'bvmld-os*-10[1-3].ciberterminal.net' user.add ceph 1002
salt 'bvmld-os*-10[1-3].ciberterminal.net' cmd.run "id ceph"
salt 'bvmld-os*-10[1-3].ciberterminal.net' cmd.run "echo 'ceph ALL = (root) NOPASSWD:ALL' >>/etc/sudoers"
salt 'bvmld-os*-10[1-3].ciberterminal.net' cmd.run "tail -2 /etc/sudoers"
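Appending straight to /etc/sudoers works but is fragile; the PRO section later in this page writes a drop-in under /etc/sudoers.d instead. A local sketch of that drop-in (a temp dir stands in for /etc/sudoers.d here; on a real minion, target /etc/sudoers.d/ceph and validate with `visudo -c`):

```shell
# Write the ceph grant as its own sudoers drop-in instead of editing
# /etc/sudoers in place. DROPIN_DIR is a stand-in for /etc/sudoers.d.
DROPIN_DIR=$(mktemp -d)
echo 'ceph ALL = (root) NOPASSWD:ALL' > "${DROPIN_DIR}/ceph"
chmod 0440 "${DROPIN_DIR}/ceph"
cat "${DROPIN_DIR}/ceph"
```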

On the adm node:

ceph@bvmld-osadm-101 ~ $ ssh-keygen 
Generating public/private rsa key pair.
Enter file in which to save the key (/home/ceph/.ssh/id_rsa): 
Created directory '/home/ceph/.ssh'.
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /home/ceph/.ssh/id_rsa.
Your public key has been saved in /home/ceph/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:pelyKY1W4iKnzuk/9B/GVV6XOgg/R1naxiSVOaUSZ1c ceph@bvmld-osadm-101.ciberterminal.net
The key's randomart image is:
+---[RSA 2048]----+
|            ..++E|
|             +O*.|
|         .. o+o=o|
|         +ooooo. |
|      . S .+.+   |
|    .. B o  o .  |
|  ..o.* O        |
| . =.o.= .       |
| o*... ..        |
+----[SHA256]-----+

On the salt-master, propagating the adm node's public key to the rest of the nodes:

salt 'bvmld-os*-10[1-3].ciberterminal.net' cmd.run "mkdir /home/ceph/.ssh"
salt 'bvmld-os*-10[1-3].ciberterminal.net' cmd.run "chown ceph. -R /home/ceph/.ssh"
salt 'bvmld-os*-10[1-3].ciberterminal.net' cmd.run "chmod 700 /home/ceph/.ssh"
salt 'bvmld-os*-10[1-3].ciberterminal.net' cmd.run "echo 'ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCjlbhQsRT5KtTcNvtqQf928sNuCDtOTYxTODKuyQK5jDjOwwMuAG8PlkFQiAsdRbY0vmf+qvpPNWIS7YUAjQrUXqCJXZt9aRJPBG2LglXDCjxzqURnc8wHqGb3XOnmmk4hv0Ke9eJxZz2xLr7FFSvGGouaq8lrN5FP3LISI1lcJ36PuxRFRoxl7Bd1xPr6g/cNJmmA382B
Kj1vDpsCTD8KChXll3Bq5XxUgzWQe3kjOdem0aVqKdAu2Or4n9tiT3l8EFGV8TXGeyNoesqhT73TU35Y4fcVx7Jfbx+mmtrsEp8tydoT6pwOMlgE/Nz4Pw/Khf94twnpWbXkB12NWfH/ ceph@bvmld-osadm-101.ciberterminal.net' >> /home/ceph/.ssh/authorized_keys"
salt 'bvmld-os*-10[1-3].ciberterminal.net' cmd.run "chown ceph. -R /home/ceph/.ssh/"
salt 'bvmld-os*-10[1-3].ciberterminal.net' cmd.run "chmod 700 /home/ceph/.ssh/"
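The same key distribution can also be expressed as a declarative Salt state, which stays idempotent across re-runs (a sketch; the state ID and the salt:// source path are hypothetical, and the public key file would live in the master's file roots):

```yaml
# /srv/salt/ceph/ssh_key.sls (hypothetical path): authorize the adm
# node's public key for the ceph user on every minion.
ceph_adm_pubkey:
  ssh_auth.present:
    - user: ceph
    - source: salt://ceph/id_rsa.pub
```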

Check:

ceph@bvmld-osadm-101 ~ $ ssh bvmld-osd-101
The authenticity of host 'bvmld-osd-101 (10.90.49.102)' can't be established.
ECDSA key fingerprint is SHA256:XDHqzuh/Ta+xaBkze6yS043LlKRhRhIfOzNHjrwrEpc.
ECDSA key fingerprint is MD5:a3:39:f3:02:1b:67:40:87:f6:82:93:ff:af:68:4d:10.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'bvmld-osd-101,10.90.49.102' (ECDSA) to the list of known hosts.
ceph@bvmld-osd-101 ~ $ logout
Connection to bvmld-osd-101 closed.
ceph@bvmld-osadm-101 ~ $ ssh bvmld-osd-102
The authenticity of host 'bvmld-osd-102 (10.90.49.103)' can't be established.
ECDSA key fingerprint is SHA256:XDHqzuh/Ta+xaBkze6yS043LlKRhRhIfOzNHjrwrEpc.
ECDSA key fingerprint is MD5:a3:39:f3:02:1b:67:40:87:f6:82:93:ff:af:68:4d:10.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'bvmld-osd-102,10.90.49.103' (ECDSA) to the list of known hosts.
ceph@bvmld-osd-102 ~ $ logout
Connection to bvmld-osd-102 closed.
ceph@bvmld-osadm-101 ~ $ ssh bvmld-osd-103
The authenticity of host 'bvmld-osd-103 (10.90.49.104)' can't be established.
ECDSA key fingerprint is SHA256:XDHqzuh/Ta+xaBkze6yS043LlKRhRhIfOzNHjrwrEpc.
ECDSA key fingerprint is MD5:a3:39:f3:02:1b:67:40:87:f6:82:93:ff:af:68:4d:10.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'bvmld-osd-103,10.90.49.104' (ECDSA) to the list of known hosts.
ceph@bvmld-osd-103 ~ $ logout
Connection to bvmld-osd-103 closed.
ceph@bvmld-osadm-101 ~ $ ssh bvmld-osm-101
The authenticity of host 'bvmld-osm-101 (10.90.49.105)' can't be established.
ECDSA key fingerprint is SHA256:XDHqzuh/Ta+xaBkze6yS043LlKRhRhIfOzNHjrwrEpc.
ECDSA key fingerprint is MD5:a3:39:f3:02:1b:67:40:87:f6:82:93:ff:af:68:4d:10.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'bvmld-osm-101,10.90.49.105' (ECDSA) to the list of known hosts.
ceph@bvmld-osm-101 ~ $ logout
Connection to bvmld-osm-101 closed.
ceph@bvmld-osadm-101 ~ $ ssh bvmld-osadm-101
The authenticity of host 'bvmld-osadm-101 (10.90.49.101)' can't be established.
ECDSA key fingerprint is SHA256:XDHqzuh/Ta+xaBkze6yS043LlKRhRhIfOzNHjrwrEpc.
ECDSA key fingerprint is MD5:a3:39:f3:02:1b:67:40:87:f6:82:93:ff:af:68:4d:10.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'bvmld-osadm-101,10.90.49.101' (ECDSA) to the list of known hosts.
Last login: Wed Oct 24 11:58:13 2018
ceph@bvmld-osadm-101 ~ $ logout
Connection to bvmld-osadm-101 closed.
ceph@bvmld-osadm-101 ~ $ 
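The interactive "yes" answers above can be avoided by pre-seeding known_hosts with ssh-keyscan before the first connection (a sketch, run as the ceph user on the adm node; same host list as this deployment):

```shell
# Collect each node's ECDSA host key up front so the first ssh session
# is non-interactive. A failed scan is reported but does not abort.
mkdir -p ~/.ssh && chmod 700 ~/.ssh
HOSTS="bvmld-osadm-101 bvmld-osd-101 bvmld-osd-102 bvmld-osd-103 bvmld-osm-101"
for h in ${HOSTS} ; do
    ssh-keyscan -T 2 -t ecdsa "${h}" >> ~/.ssh/known_hosts 2>/dev/null \
        || echo "could not scan ${h}" >&2
done
```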

Deploying Ceph, on the adm node:

su - ceph
mkdir ~/ceph-deploy
cd ~/ceph-deploy
 
ceph-deploy install bvmld-osadm-101 bvmld-osm-101 bvmld-osd-101 bvmld-osd-102 bvmld-osd-103
 
##########################
# monitoring server
##########################
THESERVER="bvmld-osm-101"
ceph-deploy new ${THESERVER}
ceph-deploy mon create-initial
ceph-deploy gatherkeys ${THESERVER}
 
##########################
# STORAGE SERVERS
##########################
#check
for i in 1 2 3 ; do 
    THESERVER="bvmld-osd-10${i}"
    echo "${THESERVER}"
    ceph-deploy disk list "${THESERVER}"
done
 
# deploy storage nodes
for i in 1 2 3 ; do 
    THESERVER="bvmld-osd-10${i}"
    echo "${THESERVER}"
    ceph-deploy disk zap ${THESERVER}:/dev/sdb
    echo "press enter to continue"
    read
    ceph-deploy osd prepare ${THESERVER}:/dev/sdb
    echo "press enter to continue"
    read
    ceph-deploy osd activate ${THESERVER}:/dev/sdb1
    echo "press enter to continue"
    read
    ceph-deploy disk list ${THESERVER}
    echo "press enter to continue"
    read
done
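Note that the zap/prepare/activate sequence above is ceph-deploy 1.x syntax; from ceph-deploy 2.0 onward the prepare/activate pair is replaced by a single `osd create` call. A dry-run sketch of the v2 equivalents (the `echo` makes it print the commands instead of running them; drop it to execute):

```shell
# ceph-deploy >= 2.0: one "osd create --data <device> <host>" per OSD
# replaces the prepare/activate pair above ("echo" = dry run).
for i in 1 2 3 ; do
    THESERVER="bvmld-osd-10${i}"
    echo ceph-deploy osd create --data /dev/sdb "${THESERVER}"
done
```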
 
##########################
# Deploy admin on all the nodes
 
ceph-deploy admin bvmld-osadm-101 bvmld-osm-101 bvmld-osd-101 bvmld-osd-102 bvmld-osd-103

On the salt-master:

avmlm-salt-001 ~ :( # salt 'bvmld-os*-10[1-3].ciberterminal.net' cmd.run "ls -l /etc/ceph/ceph.client.admin.keyring"
bvmld-osm-101.ciberterminal.net:
    -rw------- 1 root root 63 Oct 24 12:37 /etc/ceph/ceph.client.admin.keyring
bvmld-osadm-101.ciberterminal.net:
    -rw------- 1 root root 63 Oct 24 12:45 /etc/ceph/ceph.client.admin.keyring
bvmld-osd-102.ciberterminal.net:
    -rw------- 1 root root 63 Oct 24 12:37 /etc/ceph/ceph.client.admin.keyring
bvmld-osd-103.ciberterminal.net:
    -rw------- 1 root root 63 Oct 24 12:37 /etc/ceph/ceph.client.admin.keyring
bvmld-osd-101.ciberterminal.net:
    -rw------- 1 root root 63 Oct 24 12:37 /etc/ceph/ceph.client.admin.keyring
avmlm-salt-001 ~ # salt 'bvmld-os*-10[1-3].ciberterminal.net' cmd.run "chmod +r /etc/ceph/ceph.client.admin.keyring"
bvmld-osd-103.ciberterminal.net:
bvmld-osm-101.ciberterminal.net:
bvmld-osd-102.ciberterminal.net:
bvmld-osadm-101.ciberterminal.net:
bvmld-osd-101.ciberterminal.net:
avmlm-salt-001 ~ # salt 'bvmld-os*-10[1-3].ciberterminal.net' cmd.run "ls -l /etc/ceph/ceph.client.admin.keyring"
bvmld-osd-102.ciberterminal.net:
    -rw-r--r-- 1 root root 63 Oct 24 12:37 /etc/ceph/ceph.client.admin.keyring
bvmld-osd-103.ciberterminal.net:
    -rw-r--r-- 1 root root 63 Oct 24 12:37 /etc/ceph/ceph.client.admin.keyring
bvmld-osm-101.ciberterminal.net:
    -rw-r--r-- 1 root root 63 Oct 24 12:37 /etc/ceph/ceph.client.admin.keyring
bvmld-osd-101.ciberterminal.net:
    -rw-r--r-- 1 root root 63 Oct 24 12:37 /etc/ceph/ceph.client.admin.keyring
bvmld-osadm-101.ciberterminal.net:
    -rw-r--r-- 1 root root 63 Oct 24 12:45 /etc/ceph/ceph.client.admin.keyring
avmlm-salt-001 ~ # 

Check health:

ceph@bvmld-osadm-101 ~/ceph-deploy $ ssh bvmld-osm-101
Last login: Wed Oct 24 12:38:53 2018 from 10.90.49.101
ceph@bvmld-osm-101 ~ $ ceph
ceph> health
HEALTH_OK
 
ceph> quit
ceph@bvmld-osm-101 ~ $ logout
Connection to bvmld-osm-101 closed.
ceph@bvmld-osadm-101 ~/ceph-deploy :( $ ceph -s
    cluster 9d072cad-dbef-46f4-8bf8-02e4e448e7f5
     health HEALTH_OK
     monmap e1: 1 mons at {bvmld-osm-101=10.90.49.105:6789/0}
            election epoch 2, quorum 0 bvmld-osm-101
     osdmap e14: 3 osds: 3 up, 3 in
      pgmap v26: 64 pgs, 1 pools, 0 bytes data, 0 objects
            102232 kB used, 76659 MB / 76759 MB avail
                  64 active+clean
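For unattended re-checks, the manual health inspection above can be wrapped in a small gate script (a sketch; it degrades to a warning when the cluster is unreachable instead of hanging):

```shell
# Branch on "ceph health": HEALTH_OK means all good, anything else
# (including an unreachable cluster) is flagged for attention.
STATUS=$(timeout 5 ceph health 2>/dev/null || echo "HEALTH_UNKNOWN")
echo "cluster status: ${STATUS}"
case "${STATUS}" in
    HEALTH_OK*) echo "cluster healthy" ;;
    *)          echo "cluster needs attention: ${STATUS}" >&2 ;;
esac
```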

RADOS GW

Doc:

cloud-init:

#RADOS GW
export THENETWORK=N_10.90.48.0-20_V948_devDB
export THEHOSTNAME=bvmld-osgw-101
cat  cloud-init.TMPL.yaml| sed "s,THEHOSTNAME,${THEHOSTNAME,,},g" |sed "s,THEIPADDRESS,10.90.49.106,g" | sed "s,THENETMASK,20,g" | sed "s,THEGATEWAY,10.90.48.1,g" > CEPH_cluster/cloud-init.${THEHOSTNAME^^}.yaml
 
cat >> OSGW.sh <<'EOF'
THENETWORK=N_10.90.48.0-20_V948_devDB
 
HOSTLIST="BVMLD-OSGW-101"
for THEHOSTNAME in ${HOSTLIST} ; do
	acli uhura.vm.clone_with_customize ${THEHOSTNAME^^} clone_from_vm=TMPL-CentOS7.1804_v002 cloudinit_userdata_path=file:///home/nutanix/CEPH_cluster/cloud-init.${THEHOSTNAME^^}.yaml container="Container01"
	acli vm.nic_delete ${THEHOSTNAME^^} $(acli vm.nic_list ${THEHOSTNAME^^}| egrep -v "^Mac Address"|awk '{print $1}')
	acli vm.nic_create ${THEHOSTNAME^^} network="${THENETWORK}"
	acli vm.on ${THEHOSTNAME^^}
done
EOF
 
bash OSGW.sh

Apply salt states.

Deploy SSH keys:

salt 'bvmld-osgw-101.ciberterminal.net' user.add ceph 1002
salt 'bvmld-osgw-101.ciberterminal.net' cmd.run "id ceph"
salt 'bvmld-osgw-101.ciberterminal.net' cmd.run "echo 'ceph ALL = (root) NOPASSWD:ALL' >>/etc/sudoers"
salt 'bvmld-osgw-101.ciberterminal.net' cmd.run "tail -2 /etc/sudoers"
 
salt 'bvmld-osgw-101.ciberterminal.net' cmd.run "mkdir /home/ceph/.ssh"
salt 'bvmld-osgw-101.ciberterminal.net' cmd.run "chown ceph. -R /home/ceph/.ssh"
salt 'bvmld-osgw-101.ciberterminal.net' cmd.run "chmod 700 /home/ceph/.ssh"
salt 'bvmld-osgw-101.ciberterminal.net' cmd.run "echo 'ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCjlbhQsRT5KtTcNvtqQf928sNuCDtOTYxTODKuyQK5jDjOwwMuAG8PlkFQiAsdRbY0vmf+qvpPNWIS7YUAjQrUXqCJXZt9aRJPBG2LglXDCjxzqURnc8wHqGb3XOnmmk4hv0Ke9eJxZz2xLr7FFSvGGouaq8lrN5FP3LISI1lcJ36PuxRFRoxl7Bd1xPr6g/cNJmmA382BKj1vDpsCTD8KChXll3Bq5XxUgzWQe3kjOdem0aVqKdAu2Or4n9tiT3l8EFGV8TXGeyNoesqhT73TU35Y4fcVx7Jfbx+mmtrsEp8tydoT6pwOMlgE/Nz4Pw/Khf94twnpWbXkB12NWfH/ ceph@bvmld-osadm-101.ciberterminal.net' >> /home/ceph/.ssh/authorized_keys"
salt 'bvmld-osgw-101.ciberterminal.net' cmd.run "chown ceph. -R /home/ceph/.ssh/"
salt 'bvmld-osgw-101.ciberterminal.net' cmd.run "chmod 700 /home/ceph/.ssh/"

Deploy the RADOS gateway from the Ceph adm node:

THEHOSTNAME="bvmld-osgw-101"
 
ceph-deploy install --rgw ${THEHOSTNAME} --repo-url https://download.ceph.com/rpm-hammer/el7/
ceph-deploy admin ${THEHOSTNAME}
ssh ${THEHOSTNAME}
sudo su
yum -y install ceph-radosgw
exit
exit
ceph-deploy rgw create ${THEHOSTNAME}

Make 80 the default port:

cat >>ceph.conf<<EOF
[client.rgw.bvmld-osgw-101]
rgw_frontends = "civetweb port=80"
rgw_thread_pool_size = 100
EOF
ceph-deploy --overwrite-conf config push ${THEHOSTNAME}

On the gw node:

systemctl restart ceph-radosgw
systemctl status ceph-radosgw

Check (on the gw node):

bvmld-osgw-101 /home/bofher # radosgw-admin zone get
{
    "domain_root": ".rgw",
    "control_pool": ".rgw.control",
    "gc_pool": ".rgw.gc",
    "log_pool": ".log",
    "intent_log_pool": ".intent-log",
    "usage_log_pool": ".usage",
    "user_keys_pool": ".users",
    "user_email_pool": ".users.email",
    "user_swift_pool": ".users.swift",
    "user_uid_pool": ".users.uid",
    "system_key": {
        "access_key": "",
        "secret_key": ""
    },
    "placement_pools": [
        {
            "key": "default-placement",
            "val": {
                "index_pool": ".rgw.buckets.index",
                "data_pool": ".rgw.buckets",
                "data_extra_pool": ".rgw.buckets.extra"
            }
        }
    ]
}

RAW COMMANDS (PRO)

salt-master:

create_HYPERCEPH.sh:
for i in 01 03 05 07 09 11 13 15 17 19 ; do bash CloneWars.sh -c datacenter01 -h AVMLP-OSD-0${i} -i 10.20.54.1${i} -d 2048GB -m 20 -O -r 4096 -v 2 -o 2 ; done
for i in 02 04 06 08 10 12 14 16 18 20 ; do  bash CloneWars.sh -c datacenter02 -h AVMLP-OSD-0${i} -i 10.20.54.1${i} -d 2048GB -m 20 -O -r 4096 -v 2 -o 2 ; done
for i in 01 03 ; do  bash CloneWars.sh -c datacenter01 -h AVMLP-OSM-0${i} -i 10.20.54.$((${i}+50)) -d 50GB -m 20 -O -r 4096 -v 2 -o 2 ; done
for i in 02 04 ; do  bash CloneWars.sh -c datacenter02 -h AVMLP-OSM-0${i} -i 10.20.54.$((${i}+50)) -d 50GB -m 20 -O -r 4096 -v 2 -o 2 ; done
for i in 01 03 ; do  bash CloneWars.sh -c datacenter01 -h AVMLP-OSGW-0${i} -i 10.20.54.$((${i}+10)) -d 50GB -m 20 -O -r 4096 -v 2 -o 2 ; done
for i in 02 04 ; do  bash CloneWars.sh -c datacenter02 -h AVMLP-OSGW-0${i} -i 10.20.54.$((${i}+10)) -d 50GB -m 20 -O -r 4096 -v 2 -o 2 ; done
bash create_HYPERCEPH.sh
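One caveat on the `$(( ${i} + 50 ))` arithmetic in the OSM/OSGW loops above: bash treats a leading zero as octal, so extending those loops past 07 (e.g. i=08) would error out unless the base is forced. A minimal illustration:

```shell
# "08" is not valid octal, so plain $(( ${i} + 50 )) fails on it;
# the 10# prefix forces base-10 interpretation.
i=08
echo $(( 10#${i} + 50 ))
```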

ENJOY!
Check keys:

Unaccepted Keys:
avmlp-osd-001.ciberterminal.net
avmlp-osd-002.ciberterminal.net
avmlp-osd-003.ciberterminal.net
avmlp-osd-004.ciberterminal.net
avmlp-osd-005.ciberterminal.net
avmlp-osd-006.ciberterminal.net
avmlp-osd-007.ciberterminal.net
avmlp-osd-008.ciberterminal.net
avmlp-osd-009.ciberterminal.net
avmlp-osd-010.ciberterminal.net
avmlp-osd-011.ciberterminal.net
avmlp-osd-012.ciberterminal.net
avmlp-osd-013.ciberterminal.net
avmlp-osd-014.ciberterminal.net
avmlp-osd-015.ciberterminal.net
avmlp-osd-016.ciberterminal.net
avmlp-osd-017.ciberterminal.net
avmlp-osd-018.ciberterminal.net
avmlp-osd-019.ciberterminal.net
avmlp-osd-020.ciberterminal.net
avmlp-osgw-001.ciberterminal.net
avmlp-osgw-002.ciberterminal.net
avmlp-osgw-003.ciberterminal.net
avmlp-osgw-004.ciberterminal.net
avmlp-osm-001.ciberterminal.net
avmlp-osm-002.ciberterminal.net
avmlp-osm-003.ciberterminal.net
avmlp-osm-004.ciberterminal.net

Accept all:

avmlm-salt-001 /home/bofher/scripts/ceph :( # salt-key -A
The following keys are going to be accepted:
Unaccepted Keys:
avmlp-osd-001.ciberterminal.net
avmlp-osd-002.ciberterminal.net
avmlp-osd-003.ciberterminal.net
avmlp-osd-004.ciberterminal.net
avmlp-osd-005.ciberterminal.net
avmlp-osd-006.ciberterminal.net
avmlp-osd-007.ciberterminal.net
avmlp-osd-008.ciberterminal.net
avmlp-osd-009.ciberterminal.net
avmlp-osd-010.ciberterminal.net
avmlp-osd-011.ciberterminal.net
avmlp-osd-012.ciberterminal.net
avmlp-osd-013.ciberterminal.net
avmlp-osd-014.ciberterminal.net
avmlp-osd-015.ciberterminal.net
avmlp-osd-016.ciberterminal.net
avmlp-osd-017.ciberterminal.net
avmlp-osd-018.ciberterminal.net
avmlp-osd-019.ciberterminal.net
avmlp-osd-020.ciberterminal.net
avmlp-osgw-001.ciberterminal.net
avmlp-osgw-002.ciberterminal.net
avmlp-osgw-003.ciberterminal.net
avmlp-osgw-004.ciberterminal.net
avmlp-osm-001.ciberterminal.net
avmlp-osm-002.ciberterminal.net
avmlp-osm-003.ciberterminal.net
avmlp-osm-004.ciberterminal.net
Proceed? [n/Y] Y
Key for minion avmlp-osd-001.ciberterminal.net accepted.
Key for minion avmlp-osd-002.ciberterminal.net accepted.
Key for minion avmlp-osd-003.ciberterminal.net accepted.
Key for minion avmlp-osd-004.ciberterminal.net accepted.
Key for minion avmlp-osd-005.ciberterminal.net accepted.
Key for minion avmlp-osd-006.ciberterminal.net accepted.
Key for minion avmlp-osd-007.ciberterminal.net accepted.
Key for minion avmlp-osd-008.ciberterminal.net accepted.
Key for minion avmlp-osd-009.ciberterminal.net accepted.
Key for minion avmlp-osd-010.ciberterminal.net accepted.
Key for minion avmlp-osd-011.ciberterminal.net accepted.
Key for minion avmlp-osd-012.ciberterminal.net accepted.
Key for minion avmlp-osd-013.ciberterminal.net accepted.
Key for minion avmlp-osd-014.ciberterminal.net accepted.
Key for minion avmlp-osd-015.ciberterminal.net accepted.
Key for minion avmlp-osd-016.ciberterminal.net accepted.
Key for minion avmlp-osd-017.ciberterminal.net accepted.
Key for minion avmlp-osd-018.ciberterminal.net accepted.
Key for minion avmlp-osd-019.ciberterminal.net accepted.
Key for minion avmlp-osd-020.ciberterminal.net accepted.
Key for minion avmlp-osgw-001.ciberterminal.net accepted.
Key for minion avmlp-osgw-002.ciberterminal.net accepted.
Key for minion avmlp-osgw-003.ciberterminal.net accepted.
Key for minion avmlp-osgw-004.ciberterminal.net accepted.
Key for minion avmlp-osm-001.ciberterminal.net accepted.
Key for minion avmlp-osm-002.ciberterminal.net accepted.
Key for minion avmlp-osm-003.ciberterminal.net accepted.
Key for minion avmlp-osm-004.ciberterminal.net accepted.
avmlm-salt-001 /home/bofher/scripts/ceph/avmlp-os # export THESERVER="avmlp-os*-0*"
avmlm-salt-001 /home/bofher/scripts/ceph/avmlp-os # salt "${THESERVER}" test.ping
avmlp-osd-001.ciberterminal.net:
    True
avmlp-osgw-002.ciberterminal.net:
    True
avmlp-osm-002.ciberterminal.net:
    True
avmlp-osm-004.ciberterminal.net:
    True
avmlp-osgw-001.ciberterminal.net:
    True
avmlp-osm-003.ciberterminal.net:
    True
avmlp-osm-001.ciberterminal.net:
    True
avmlp-osgw-004.ciberterminal.net:
    True
avmlp-osgw-003.ciberterminal.net:
    True
avmlp-osd-010.ciberterminal.net:
    True
avmlp-osd-006.ciberterminal.net:
    True
avmlp-osd-008.ciberterminal.net:
    True
avmlp-osd-016.ciberterminal.net:
    True
avmlp-osd-012.ciberterminal.net:
    True
avmlp-osd-014.ciberterminal.net:
    True
avmlp-osd-002.ciberterminal.net:
    True
avmlp-osd-003.ciberterminal.net:
    True
avmlp-osd-004.ciberterminal.net:
    True
avmlp-osd-020.ciberterminal.net:
    True
avmlp-osd-018.ciberterminal.net:
    True
avmlp-osd-009.ciberterminal.net:
    True
avmlp-osd-019.ciberterminal.net:
    True
avmlp-osd-011.ciberterminal.net:
    True
avmlp-osd-007.ciberterminal.net:
    True
avmlp-osd-017.ciberterminal.net:
    True
avmlp-osd-013.ciberterminal.net:
    True
avmlp-osd-005.ciberterminal.net:
    True
avmlp-osd-015.ciberterminal.net:
    True
avmlm-salt-001 /home/bofher/scripts/ceph/avmlp-os # salt "${THESERVER}" state.apply
...
too much stuff here xD
...
avmlm-salt-001 /home/bofher/scripts/ceph/avmlp-os # salt "${THESERVER}" state.apply nsupdate
...
too much stuff here xD
...

Check nsupdate:

cat > ping_all.sh <<'EOF'
for i in 01 03 05 07 09 11 13 15 17 19 ; do ping -w1 -c1 AVMLP-OSD-0${i} >/dev/null && echo "AVMLP-OSD-0${i} OK" || echo "FAIL -------> AVMLP-OSD-0${i}" ; done
for i in 02 04 06 08 10 12 14 16 18 20 ; do ping -w1 -c1 AVMLP-OSD-0${i} >/dev/null && echo "AVMLP-OSD-0${i} OK" || echo "FAIL -------> AVMLP-OSD-0${i}"  ; done
for i in 01 03 ; do ping -w1 -c1 AVMLP-OSM-0${i} >/dev/null && echo "AVMLP-OSM-0${i} OK" || echo "FAIL -------> AVMLP-OSM-0${i}" ; done
for i in 02 04 ; do ping -w1 -c1 AVMLP-OSM-0${i} >/dev/null && echo "AVMLP-OSM-0${i} OK" || echo "FAIL -------> AVMLP-OSM-0${i}" ; done
for i in 01 03 ; do ping -w1 -c1 AVMLP-OSGW-0${i} >/dev/null && echo "AVMLP-OSGW-0${i} OK" || echo "FAIL -------> AVMLP-OSGW-0${i}" ; done
for i in 02 04 ; do ping -w1 -c1 AVMLP-OSGW-0${i} >/dev/null && echo "AVMLP-OSGW-0${i} OK" || echo "FAIL -------> AVMLP-OSGW-0${i}" ; done
EOF
avmlm-salt-001 /home/bofher/scripts/ceph/avmlp-os # bash ping_all.sh
AVMLP-OSD-001 OK
AVMLP-OSD-003 OK
AVMLP-OSD-005 OK
AVMLP-OSD-007 OK
AVMLP-OSD-009 OK
AVMLP-OSD-011 OK
AVMLP-OSD-013 OK
AVMLP-OSD-015 OK
AVMLP-OSD-017 OK
AVMLP-OSD-019 OK
AVMLP-OSD-002 OK
AVMLP-OSD-004 OK
AVMLP-OSD-006 OK
AVMLP-OSD-008 OK
AVMLP-OSD-010 OK
AVMLP-OSD-012 OK
AVMLP-OSD-014 OK
AVMLP-OSD-016 OK
AVMLP-OSD-018 OK
AVMLP-OSD-020 OK
AVMLP-OSM-001 OK
AVMLP-OSM-003 OK
AVMLP-OSM-002 OK
AVMLP-OSM-004 OK
AVMLP-OSGW-001 OK
AVMLP-OSGW-003 OK
AVMLP-OSGW-002 OK
AVMLP-OSGW-004 OK
avmlm-salt-001 /home/bofher/scripts/ceph/avmlp-os # salt "${THESERVER}" pkg.install yum-plugin-priorities
...
too much stuff here xD
...

Installing ceph-deploy on all the mons:

avmlm-salt-001 /home/bofher/scripts/ceph/avmlp-os # export THESERVER="avmlp-osm-0*"
avmlm-salt-001 /home/bofher/scripts/ceph/avmlp-os # salt "${THESERVER}" pkg.install ceph-deploy

Ceph user:

avmlm-salt-001 /home/bofher/scripts/ceph/avmlp-os # export THESERVER="avmlp-os*-0*"
avmlm-salt-001 /home/bofher/scripts/ceph/avmlp-os # salt "${THESERVER}" user.add ceph 1002
avmlm-salt-001 /home/bofher/scripts/ceph/avmlp-os # salt "${THESERVER}" file.write /etc/sudoers.d/ceph \
> "ceph ALL = (root) NOPASSWD:ALL"
avmlm-salt-001 /home/bofher/scripts/ceph/avmlp-os # salt "${THESERVER}" cmd.run "sudo whoami" runas=ceph
avmlm-salt-001 /home/bofher/scripts/ceph/avmlp-os :( # salt "${THESERVER}" cmd.run \
>     "ssh-keygen -q -N '' -f /home/ceph/.ssh/id_rsa" \
>     runas=ceph
avmlm-salt-001 /home/bofher/scripts/ceph/avmlp-os # salt "${THESERVER}" cmd.run "cat /home/ceph/.ssh/id_rsa.pub" |egrep -v "^avmlp" | sed 's/^[[:space:]]\{1,5\}//g' > auth_keys_avmlp-os.txt
avmlm-salt-001 /home/bofher/scripts/ceph/avmlp-os # while read LINE ; do salt "${THESERVER}" file.append /home/ceph/.ssh/authorized_keys "${LINE}" ; done < auth_keys_avmlp-os.txt 
avmlm-salt-001 /home/bofher/scripts/ceph/avmlp-os # salt "${THESERVER}" file.grep /home/ceph/.ssh/authorized_keys avmlp

I'll use avmlp-osm-001 for cluster deployment!

avmlp-osm-001 /home/bofher # su - ceph
Last login: Fri Jun  7 11:54:02 CEST 2019
ceph@avmlp-osm-001 ~ $ 
ceph@avmlp-osm-001 ~ $ mkdir ~/ceph-deploy
ceph@avmlp-osm-001 ~ $ cd ~/ceph-deploy

Accepting all the SSH host keys:

ceph@avmlp-osm-001 ~/ceph-deploy $ export ALLSERVERS="avmlp-osd-011.ciberterminal.net avmlp-osd-008.ciberterminal.net avmlp-osd-010.ciberterminal.net avmlp-osd-004.ciberterminal.net avmlp-osd-012.ciberterminal.net avmlp-osd-003.ciberterminal.net avmlp-osd-017.ciberterminal.net avmlp-osd-007.ciberterminal.net avmlp-osd-018.ciberterminal.net avmlp-osd-005.ciberterminal.net avmlp-osd-009.ciberterminal.net avmlp-osd-001.ciberterminal.net avmlp-osgw-002.ciberterminal.net avmlp-osd-014.ciberterminal.net avmlp-osd-019.ciberterminal.net avmlp-osm-002.ciberterminal.net avmlp-osd-002.ciberterminal.net avmlp-osd-015.ciberterminal.net avmlp-osm-001.ciberterminal.net avmlp-osd-016.ciberterminal.net avmlp-osgw-001.ciberterminal.net avmlp-osm-004.ciberterminal.net avmlp-osd-013.ciberterminal.net avmlp-osgw-003.ciberterminal.net avmlp-osd-020.ciberterminal.net avmlp-osd-006.ciberterminal.net avmlp-osm-003.ciberterminal.net avmlp-osgw-004.ciberterminal.net"
ceph@avmlp-osm-001 ~/ceph-deploy $ for i in ${ALLSERVERS} ; do ssh -o CheckHostIp=no -o ConnectTimeout=15 -o StrictHostKeyChecking=no $i hostname; done

Enabling nautilus:

ceph@avmlp-osm-001 ~/ceph-deploy $ cat << 'EOM' > ceph.repo
[Ceph]
name=Ceph packages for $basearch
baseurl=http://download.ceph.com/rpm-nautilus/el7/$basearch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc

[Ceph-noarch]
name=Ceph noarch packages
baseurl=http://download.ceph.com/rpm-nautilus/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc

[ceph-source]
name=Ceph source packages
baseurl=http://download.ceph.com/rpm-nautilus/el7/SRPMS
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
EOM
for i in ${ALLSERVERS} ; do scp ceph.repo ${i}:/home/ceph/ ; ssh ${i} "sudo mv /home/ceph/ceph.repo /etc/yum.repos.d/"  ; done
for i in ${ALLSERVERS} ; do ssh ${i} "sudo chown root. /etc/yum.repos.d/ceph.repo"  ; done
for i in ${ALLSERVERS} ; do ssh ${i} "sudo ls -l /etc/yum.repos.d/ceph.repo"  ; done
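The scp/mv/chown loop above can likewise be handled by a single Salt state pushed from the master, which is idempotent and fixes ownership and mode in one step (a sketch; the state ID and salt:// source path are hypothetical):

```yaml
# /srv/salt/ceph/repo.sls (hypothetical path): distribute ceph.repo
# to every node with the right owner and permissions.
ceph_nautilus_repo:
  file.managed:
    - name: /etc/yum.repos.d/ceph.repo
    - source: salt://ceph/ceph.repo
    - user: root
    - group: root
    - mode: 644
```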

Updating the repo info from the salt-master:

salt "${THESERVER}" pkg.update

This upgrades ceph-deploy to v2 (for nautilus); back on the OSM:

ceph@avmlp-osm-001 ~/ceph-deploy $ ceph-deploy --version
2.0.1

A test installation on OSM-001 reports that it will install “mimic” instead of “nautilus”… that's fine; it's not the latest release, but the previous one…

[avmlp-osm-001.ciberterminal.net][DEBUG ] Complete!
[avmlp-osm-001.ciberterminal.net][INFO  ] Running command: sudo ceph --version
[avmlp-osm-001.ciberterminal.net][DEBUG ] ceph version 13.2.6 (7b695f835b03642f85998b2ae7b6dd093d9fbce4) mimic (stable)

Installing on the rest of the servers:

ceph@avmlp-osm-001 ~/ceph-deploy $ export LISTOFSERVERS="avmlp-osd-006.ciberterminal.net avmlp-osd-011.ciberterminal.net avmlp-osm-004.ciberterminal.net avmlp-osd-019.ciberterminal.net avmlp-osd-009.ciberterminal.net avmlp-osd-005.ciberterminal.net avmlp-osd-003.ciberterminal.net avmlp-osm-003.ciberterminal.net avmlp-osd-012.ciberterminal.net avmlp-osd-010.ciberterminal.net avmlp-osd-002.ciberterminal.net avmlp-osd-007.ciberterminal.net avmlp-osm-001.ciberterminal.net avmlp-osd-004.ciberterminal.net avmlp-osm-002.ciberterminal.net avmlp-osd-017.ciberterminal.net avmlp-osd-014.ciberterminal.net avmlp-osd-020.ciberterminal.net avmlp-osd-013.ciberterminal.net avmlp-osd-015.ciberterminal.net avmlp-osd-018.ciberterminal.net avmlp-osd-016.ciberterminal.net avmlp-osd-008.ciberterminal.net"
ceph@avmlp-osm-001 ~/ceph-deploy $ ceph-deploy install ${LISTOFSERVERS}
Meanwhile, grabbing a beer...

All nodes deployed successfully (checked).
Deploying the mons:

ceph@avmlp-osm-001 ~/ceph-deploy $ export MONSERVERS="avmlp-osm-002.ciberterminal.net avmlp-osm-001.ciberterminal.net avmlp-osm-004.ciberterminal.net avmlp-osm-003.ciberterminal.net"
ceph@avmlp-osm-001 ~/ceph-deploy $ ceph-deploy new ${MONSERVERS}
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /bin/ceph-deploy new avmlp-osm-002.ciberterminal.net avmlp-osm-001.ciberterminal.net avmlp-osm-004.ciberterminal.net avmlp-osm-003.ciberterminal.net
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  func                          : <function new at 0x7f6f6dc33e60>
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f6f6dc5cd88>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  ssh_copykey                   : True
[ceph_deploy.cli][INFO  ]  mon                           : ['avmlp-osm-002.ciberterminal.net', 'avmlp-osm-001.ciberterminal.net', 'avmlp-osm-004.ciberterminal.net', 'avmlp-osm-003.ciberterminal.net']
[ceph_deploy.cli][INFO  ]  public_network                : None
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  cluster_network               : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  fsid                          : None
[ceph_deploy.new][DEBUG ] Creating new cluster named ceph
[ceph_deploy.new][INFO  ] making sure passwordless SSH succeeds
[avmlp-osm-002.ciberterminal.net][DEBUG ] connected to host: avmlp-osm-001.ciberterminal.net 
[avmlp-osm-002.ciberterminal.net][INFO  ] Running command: ssh -CT -o BatchMode=yes avmlp-osm-002.ciberterminal.net
[avmlp-osm-002.ciberterminal.net][DEBUG ] connection detected need for sudo
[avmlp-osm-002.ciberterminal.net][DEBUG ] connected to host: avmlp-osm-002.ciberterminal.net 
[avmlp-osm-002.ciberterminal.net][DEBUG ] detect platform information from remote host
[avmlp-osm-002.ciberterminal.net][DEBUG ] detect machine type
[avmlp-osm-002.ciberterminal.net][DEBUG ] find the location of an executable
[avmlp-osm-002.ciberterminal.net][INFO  ] Running command: sudo /usr/sbin/ip link show
[avmlp-osm-002.ciberterminal.net][INFO  ] Running command: sudo /usr/sbin/ip addr show
[avmlp-osm-002.ciberterminal.net][DEBUG ] IP addresses found: [u'10.20.54.52']
[ceph_deploy.new][DEBUG ] Resolving host avmlp-osm-002.ciberterminal.net
[ceph_deploy.new][DEBUG ] Monitor avmlp-osm-002 at 10.20.54.52
[ceph_deploy.new][INFO  ] making sure passwordless SSH succeeds
[avmlp-osm-001.ciberterminal.net][DEBUG ] connection detected need for sudo
[avmlp-osm-001.ciberterminal.net][DEBUG ] connected to host: avmlp-osm-001.ciberterminal.net 
[avmlp-osm-001.ciberterminal.net][DEBUG ] detect platform information from remote host
[avmlp-osm-001.ciberterminal.net][DEBUG ] detect machine type
[avmlp-osm-001.ciberterminal.net][DEBUG ] find the location of an executable
[avmlp-osm-001.ciberterminal.net][INFO  ] Running command: sudo /usr/sbin/ip link show
[avmlp-osm-001.ciberterminal.net][INFO  ] Running command: sudo /usr/sbin/ip addr show
[avmlp-osm-001.ciberterminal.net][DEBUG ] IP addresses found: [u'10.20.54.51']
[ceph_deploy.new][DEBUG ] Resolving host avmlp-osm-001.ciberterminal.net
[ceph_deploy.new][DEBUG ] Monitor avmlp-osm-001 at 10.20.54.51
[ceph_deploy.new][INFO  ] making sure passwordless SSH succeeds
[avmlp-osm-004.ciberterminal.net][DEBUG ] connected to host: avmlp-osm-001.ciberterminal.net 
[avmlp-osm-004.ciberterminal.net][INFO  ] Running command: ssh -CT -o BatchMode=yes avmlp-osm-004.ciberterminal.net
[avmlp-osm-004.ciberterminal.net][DEBUG ] connection detected need for sudo
[avmlp-osm-004.ciberterminal.net][DEBUG ] connected to host: avmlp-osm-004.ciberterminal.net 
[avmlp-osm-004.ciberterminal.net][DEBUG ] detect platform information from remote host
[avmlp-osm-004.ciberterminal.net][DEBUG ] detect machine type
[avmlp-osm-004.ciberterminal.net][DEBUG ] find the location of an executable
[avmlp-osm-004.ciberterminal.net][INFO  ] Running command: sudo /usr/sbin/ip link show
[avmlp-osm-004.ciberterminal.net][INFO  ] Running command: sudo /usr/sbin/ip addr show
[avmlp-osm-004.ciberterminal.net][DEBUG ] IP addresses found: [u'10.20.54.54']
[ceph_deploy.new][DEBUG ] Resolving host avmlp-osm-004.ciberterminal.net
[ceph_deploy.new][DEBUG ] Monitor avmlp-osm-004 at 10.20.54.54
[ceph_deploy.new][INFO  ] making sure passwordless SSH succeeds
[avmlp-osm-003.ciberterminal.net][DEBUG ] connected to host: avmlp-osm-001.ciberterminal.net 
[avmlp-osm-003.ciberterminal.net][INFO  ] Running command: ssh -CT -o BatchMode=yes avmlp-osm-003.ciberterminal.net
[avmlp-osm-003.ciberterminal.net][DEBUG ] connection detected need for sudo
[avmlp-osm-003.ciberterminal.net][DEBUG ] connected to host: avmlp-osm-003.ciberterminal.net 
[avmlp-osm-003.ciberterminal.net][DEBUG ] detect platform information from remote host
[avmlp-osm-003.ciberterminal.net][DEBUG ] detect machine type
[avmlp-osm-003.ciberterminal.net][DEBUG ] find the location of an executable
[avmlp-osm-003.ciberterminal.net][INFO  ] Running command: sudo /usr/sbin/ip link show
[avmlp-osm-003.ciberterminal.net][INFO  ] Running command: sudo /usr/sbin/ip addr show
[avmlp-osm-003.ciberterminal.net][DEBUG ] IP addresses found: [u'10.20.54.53']
[ceph_deploy.new][DEBUG ] Resolving host avmlp-osm-003.ciberterminal.net
[ceph_deploy.new][DEBUG ] Monitor avmlp-osm-003 at 10.20.54.53
[ceph_deploy.new][DEBUG ] Monitor initial members are ['avmlp-osm-002', 'avmlp-osm-001', 'avmlp-osm-004', 'avmlp-osm-003']
[ceph_deploy.new][DEBUG ] Monitor addrs are ['10.20.54.52', '10.20.54.51', '10.20.54.54', '10.20.54.53']
[ceph_deploy.new][DEBUG ] Creating a random mon key...
[ceph_deploy.new][DEBUG ] Writing monitor keyring to ceph.mon.keyring...
[ceph_deploy.new][DEBUG ] Writing initial config to ceph.conf...

Shit:

ceph@avmlp-osm-001 ~/ceph-deploy $ ceph-deploy mon create-initial
..........
[ceph_deploy.mon][ERROR ] Some monitors have still not reached quorum:
[ceph_deploy.mon][ERROR ] avmlp-osm-004
[ceph_deploy.mon][ERROR ] avmlp-osm-001
[ceph_deploy.mon][ERROR ] avmlp-osm-002
[ceph_deploy.mon][ERROR ] avmlp-osm-003

from avmlp-osm-003:

, or missing features 0 7f49f2bb3700  1 mon.avmlp-osm-003@2(electing) e1  peer v1:10.20.54.51:6789/0 release  < min_mon_release 

That is: the peer at 10.20.54.51 (osm-001) reports a release older than this monitor's min_mon_release, so the nodes are running mixed Ceph versions. Oh my gosh:

avmlp-osm-001 /var/log/ceph # rpm -qa|egrep ceph
ceph-selinux-13.2.6-0.el7.x86_64
ceph-osd-13.2.6-0.el7.x86_64
libcephfs2-13.2.6-0.el7.x86_64
ceph-mgr-13.2.6-0.el7.x86_64
ceph-13.2.6-0.el7.x86_64
python-cephfs-13.2.6-0.el7.x86_64
ceph-common-13.2.6-0.el7.x86_64
ceph-mds-13.2.6-0.el7.x86_64
ceph-radosgw-13.2.6-0.el7.x86_64
ceph-deploy-2.0.1-0.noarch
ceph-base-13.2.6-0.el7.x86_64
ceph-mon-13.2.6-0.el7.x86_64
ceph-release-1-1.el7.noarch
avmlp-osm-002 /home/ceph # rpm -qa|egrep ceph-mon
ceph-mon-13.2.6-0.el7.x86_64
avmlp-osm-003 /var/log/ceph # rpm -qa|egrep ceph-mon
ceph-mon-14.2.1-0.el7.x86_64

There are 2 different Ceph versions installed (13.2 “Mimic” on some nodes, 14.2 “Nautilus” on others)… I'm removing ALL the installed packages and re-installing :-(
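A dry-run sketch of that cleanup (the host list and package globs are assumptions; pipe the output through `sh`, or drop the `echo`, to actually execute it):

```shell
# Dry run: print one package-removal command per monitor host
# instead of executing it over ssh.
cleanup_cmds() {
    for THESERVER in "$@" ; do
        echo "ssh ${THESERVER} sudo yum -y remove 'ceph*' 'libcephfs*' 'python-cephfs'"
    done
}
cleanup_cmds avmlp-osm-001 avmlp-osm-002 avmlp-osm-003 avmlp-osm-004
```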

cat >/etc/yum.repos.d/ceph.repo<<EOF
[Ceph]
name=Ceph packages for \$basearch
baseurl=http://download.ceph.com/rpm-nautilus/el7/\$basearch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
priority=1
 
[Ceph-noarch]
name=Ceph noarch packages
baseurl=http://download.ceph.com/rpm-nautilus/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
priority=1
 
[ceph-source]
name=Ceph source packages
baseurl=http://download.ceph.com/rpm-nautilus/el7/SRPMS
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
priority=1
EOF

Checked: “nautilus” (14.2) is now installed on all the nodes…
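To make that check mechanical, a small sketch of a filter that pulls the major release out of `ceph --version` output (the sample line mirrors the format the binary prints), so a loop over the hosts can assert everything reports 14:

```shell
# Extract the major version from "ceph --version" output,
# e.g. "ceph version 14.2.1 (...) nautilus (stable)" -> "14".
ceph_major() {
    awk '/^ceph version/ {split($3, v, "."); print v[1]}'
}
echo "ceph version 14.2.1 (d555a9489eb35f84f2e1ef49b77e19da9d113972) nautilus (stable)" | ceph_major
```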

ceph-deploy install ${LISTOFSERVERS}
ceph-deploy new ${LISTOFSERVERS}
ceph-deploy mon create-initial

Deploy successful.
Quorum OK:

{
    "name": "avmlp-osm-001",
    "rank": 0,
    "state": "leader",
    "election_epoch": 133666,
    "quorum": [
        0,
        1,
        2,
        3
    ],
    "quorum_age": 312,
    "features": {
        "required_con": "2449958747315912708",
        "required_mon": [
            "kraken",
            "luminous",
            "mimic",
            "osdmap-prune",
            "nautilus"
        ],
        "quorum_con": "4611087854031667199",
        "quorum_mon": [
            "kraken",
            "luminous",
            "mimic",
            "osdmap-prune",
            "nautilus"
        ]
    },
    "outside_quorum": [],
    "extra_probe_peers": [],
    "sync_provider": [],
    "monmap": {
        "epoch": 2,
        "fsid": "aefcf554-f949-4457-a049-0bfb432e40c4",
        "modified": "2019-06-11 11:13:54.467775",
        "created": "2019-06-07 13:18:41.960939",
        "min_mon_release": 14,
        "min_mon_release_name": "nautilus",
        "features": {
            "persistent": [
                "kraken",
                "luminous",
                "mimic",
                "osdmap-prune",
                "nautilus"
            ],
            "optional": []
        },
        "mons": [
            {
                "rank": 0,
                "name": "avmlp-osm-001",
                "public_addrs": {
                    "addrvec": [
                        {
                            "type": "v1",
                            "addr": "10.20.54.51:6789",
                            "nonce": 0
                        }
                    ]
                },
                "addr": "10.20.54.51:6789/0",
                "public_addr": "10.20.54.51:6789/0"
            },
            {
                "rank": 1,
                "name": "avmlp-osm-002",
                "public_addrs": {
                    "addrvec": [
                        {
                            "type": "v1",
                            "addr": "10.20.54.52:6789",
                            "nonce": 0
                        }
                    ]
                },
                "addr": "10.20.54.52:6789/0",
                "public_addr": "10.20.54.52:6789/0"
            },
            {
                "rank": 2,
                "name": "avmlp-osm-003",
                "public_addrs": {
                    "addrvec": [
                        {
                            "type": "v1",
                            "addr": "10.20.54.53:6789",
                            "nonce": 0
                        }
                    ]
                },
                "addr": "10.20.54.53:6789/0",
                "public_addr": "10.20.54.53:6789/0"
            },
            {
                "rank": 3,
                "name": "avmlp-osm-004",
                "public_addrs": {
                    "addrvec": [
                        {
                            "type": "v1",
                            "addr": "10.20.54.54:6789",
                            "nonce": 0
                        }
                    ]
                },
                "addr": "10.20.54.54:6789/0",
                "public_addr": "10.20.54.54:6789/0"
            }
        ]
    },
    "feature_map": {
        "mon": [
            {
                "features": "0x3ffddff8ffacffff",
                "release": "luminous",
                "num": 1
            }
        ]
    }
}

gatherkeys:

for i in ${MONSERVERS} ; do ceph-deploy gatherkeys ${i} ; done
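`gatherkeys` drops the keyrings into the working directory; a quick sanity check that they all arrived (the file names below are the ones ceph-deploy normally writes, listed here as an assumption):

```shell
# Verify the keyrings fetched by "ceph-deploy gatherkeys" exist
# in the current working directory.
check_keyrings() {
    for k in ceph.client.admin ceph.mon ceph.bootstrap-osd \
             ceph.bootstrap-mgr ceph.bootstrap-mds ceph.bootstrap-rgw ; do
        if [ -f "${k}.keyring" ] ; then
            echo "OK      ${k}.keyring"
        else
            echo "MISSING ${k}.keyring"
        fi
    done
}
check_keyrings
```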

Deploy the “admin” keyring and config to all the MONs:

ceph@avmlp-osm-001 ~/ceph-deploy $ ceph-deploy admin ${LISTOFSERVERS}
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /bin/ceph-deploy admin avmlp-osm-002.ciberterminal.net avmlp-osm-001.ciberterminal.net avmlp-osm-004.ciberterminal.net avmlp-osm-003.ciberterminal.net
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7fbe252596c8>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  client                        : ['avmlp-osm-002.ciberterminal.net', 'avmlp-osm-001.ciberterminal.net', 'avmlp-osm-004.ciberterminal.net', 'avmlp-osm-003.ciberterminal.net']
[ceph_deploy.cli][INFO  ]  func                          : <function admin at 0x7fbe25afb2a8>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to avmlp-osm-002.ciberterminal.net
[avmlp-osm-002.ciberterminal.net][DEBUG ] connection detected need for sudo
[avmlp-osm-002.ciberterminal.net][DEBUG ] connected to host: avmlp-osm-002.ciberterminal.net 
[avmlp-osm-002.ciberterminal.net][DEBUG ] detect platform information from remote host
[avmlp-osm-002.ciberterminal.net][DEBUG ] detect machine type
[avmlp-osm-002.ciberterminal.net][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to avmlp-osm-001.ciberterminal.net
[avmlp-osm-001.ciberterminal.net][DEBUG ] connection detected need for sudo
[avmlp-osm-001.ciberterminal.net][DEBUG ] connected to host: avmlp-osm-001.ciberterminal.net 
[avmlp-osm-001.ciberterminal.net][DEBUG ] detect platform information from remote host
[avmlp-osm-001.ciberterminal.net][DEBUG ] detect machine type
[avmlp-osm-001.ciberterminal.net][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to avmlp-osm-004.ciberterminal.net
[avmlp-osm-004.ciberterminal.net][DEBUG ] connection detected need for sudo
[avmlp-osm-004.ciberterminal.net][DEBUG ] connected to host: avmlp-osm-004.ciberterminal.net 
[avmlp-osm-004.ciberterminal.net][DEBUG ] detect platform information from remote host
[avmlp-osm-004.ciberterminal.net][DEBUG ] detect machine type
[avmlp-osm-004.ciberterminal.net][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to avmlp-osm-003.ciberterminal.net
[avmlp-osm-003.ciberterminal.net][DEBUG ] connection detected need for sudo
[avmlp-osm-003.ciberterminal.net][DEBUG ] connected to host: avmlp-osm-003.ciberterminal.net 
[avmlp-osm-003.ciberterminal.net][DEBUG ] detect platform information from remote host
[avmlp-osm-003.ciberterminal.net][DEBUG ] detect machine type
[avmlp-osm-003.ciberterminal.net][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf

Deploy the manager (mgr) daemons:

ceph@avmlp-osm-001 ~/ceph-deploy $ ceph-deploy mgr create avmlp-osm-001.ciberterminal.net
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /bin/ceph-deploy mgr create avmlp-osm-001.ciberterminal.net
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  mgr                           : [('avmlp-osm-001.ciberterminal.net', 'avmlp-osm-001.ciberterminal.net')]
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : create
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f260a56fd88>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  func                          : <function mgr at 0x7f260a9c11b8>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.mgr][DEBUG ] Deploying mgr, cluster ceph hosts avmlp-osm-001.ciberterminal.net:avmlp-osm-001.ciberterminal.net
[avmlp-osm-001.ciberterminal.net][DEBUG ] connection detected need for sudo
[avmlp-osm-001.ciberterminal.net][DEBUG ] connected to host: avmlp-osm-001.ciberterminal.net 
[avmlp-osm-001.ciberterminal.net][DEBUG ] detect platform information from remote host
[avmlp-osm-001.ciberterminal.net][DEBUG ] detect machine type
[ceph_deploy.mgr][INFO  ] Distro info: CentOS Linux 7.6.1810 Core
[ceph_deploy.mgr][DEBUG ] remote host will use systemd
[ceph_deploy.mgr][DEBUG ] deploying mgr bootstrap to avmlp-osm-001.ciberterminal.net
[avmlp-osm-001.ciberterminal.net][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[avmlp-osm-001.ciberterminal.net][WARNIN] mgr keyring does not exist yet, creating one
[avmlp-osm-001.ciberterminal.net][DEBUG ] create a keyring file
[avmlp-osm-001.ciberterminal.net][DEBUG ] create path recursively if it doesn't exist
[avmlp-osm-001.ciberterminal.net][INFO  ] Running command: sudo ceph --cluster ceph --name client.bootstrap-mgr --keyring /var/lib/ceph/bootstrap-mgr/ceph.keyring auth get-or-create mgr.avmlp-osm-001.ciberterminal.net mon allow profile mgr osd allow * mds allow * -o /var/lib/ceph/mgr/ceph-avmlp-osm-001.ciberterminal.net/keyring
[avmlp-osm-001.ciberterminal.net][INFO  ] Running command: sudo systemctl enable ceph-mgr@avmlp-osm-001.ciberterminal.net
[avmlp-osm-001.ciberterminal.net][WARNIN] Created symlink from /etc/systemd/system/ceph-mgr.target.wants/ceph-mgr@avmlp-osm-001.ciberterminal.net.service to /usr/lib/systemd/system/ceph-mgr@.service.
[avmlp-osm-001.ciberterminal.net][INFO  ] Running command: sudo systemctl start ceph-mgr@avmlp-osm-001.ciberterminal.net
[avmlp-osm-001.ciberterminal.net][INFO  ] Running command: sudo systemctl enable ceph.target
ceph@avmlp-osm-001 ~/ceph-deploy $ ceph-deploy mgr create avmlp-osm-002.ciberterminal.net                                                                                                                                                                                                                                        
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /bin/ceph-deploy mgr create avmlp-osm-002.ciberterminal.net
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  mgr                           : [('avmlp-osm-002.ciberterminal.net', 'avmlp-osm-002.ciberterminal.net')]
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : create
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f42930b9d88>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  func                          : <function mgr at 0x7f429350b1b8>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.mgr][DEBUG ] Deploying mgr, cluster ceph hosts avmlp-osm-002.ciberterminal.net:avmlp-osm-002.ciberterminal.net
[avmlp-osm-002.ciberterminal.net][DEBUG ] connection detected need for sudo
[avmlp-osm-002.ciberterminal.net][DEBUG ] connected to host: avmlp-osm-002.ciberterminal.net 
[avmlp-osm-002.ciberterminal.net][DEBUG ] detect platform information from remote host
[avmlp-osm-002.ciberterminal.net][DEBUG ] detect machine type
[ceph_deploy.mgr][INFO  ] Distro info: CentOS Linux 7.6.1810 Core
[ceph_deploy.mgr][DEBUG ] remote host will use systemd
[ceph_deploy.mgr][DEBUG ] deploying mgr bootstrap to avmlp-osm-002.ciberterminal.net
[avmlp-osm-002.ciberterminal.net][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[avmlp-osm-002.ciberterminal.net][WARNIN] mgr keyring does not exist yet, creating one
[avmlp-osm-002.ciberterminal.net][DEBUG ] create a keyring file
[avmlp-osm-002.ciberterminal.net][DEBUG ] create path recursively if it doesn't exist
[avmlp-osm-002.ciberterminal.net][INFO  ] Running command: sudo ceph --cluster ceph --name client.bootstrap-mgr --keyring /var/lib/ceph/bootstrap-mgr/ceph.keyring auth get-or-create mgr.avmlp-osm-002.ciberterminal.net mon allow profile mgr osd allow * mds allow * -o /var/lib/ceph/mgr/ceph-avmlp-osm-002.ciberterminal.net/keyring
[avmlp-osm-002.ciberterminal.net][INFO  ] Running command: sudo systemctl enable ceph-mgr@avmlp-osm-002.ciberterminal.net
[avmlp-osm-002.ciberterminal.net][WARNIN] Created symlink from /etc/systemd/system/ceph-mgr.target.wants/ceph-mgr@avmlp-osm-002.ciberterminal.net.service to /usr/lib/systemd/system/ceph-mgr@.service.
[avmlp-osm-002.ciberterminal.net][INFO  ] Running command: sudo systemctl start ceph-mgr@avmlp-osm-002.ciberterminal.net
[avmlp-osm-002.ciberterminal.net][INFO  ] Running command: sudo systemctl enable ceph.target

Also, I've checked the Ceph version on all the OSDs:

avmlm-salt-001 /home/bofher/scripts/ceph/avmlp-os # for i in $(salt "${THESERVER}" test.ping| egrep "^a"|awk -F\: '{print $1}'| sort) ; do salt "${i}" cmd.run "sudo ceph --version" ; done
avmlp-osd-001.ciberterminal.net:
    ceph version 14.2.1 (d555a9489eb35f84f2e1ef49b77e19da9d113972) nautilus (stable)
avmlp-osd-002.ciberterminal.net:
    ceph version 14.2.1 (d555a9489eb35f84f2e1ef49b77e19da9d113972) nautilus (stable)
avmlp-osd-003.ciberterminal.net:
    ceph version 14.2.1 (d555a9489eb35f84f2e1ef49b77e19da9d113972) nautilus (stable)
avmlp-osd-004.ciberterminal.net:
    ceph version 14.2.1 (d555a9489eb35f84f2e1ef49b77e19da9d113972) nautilus (stable)
avmlp-osd-005.ciberterminal.net:
    ceph version 14.2.1 (d555a9489eb35f84f2e1ef49b77e19da9d113972) nautilus (stable)
avmlp-osd-006.ciberterminal.net:
    ceph version 14.2.1 (d555a9489eb35f84f2e1ef49b77e19da9d113972) nautilus (stable)
avmlp-osd-007.ciberterminal.net:
    ceph version 14.2.1 (d555a9489eb35f84f2e1ef49b77e19da9d113972) nautilus (stable)
avmlp-osd-008.ciberterminal.net:
    ceph version 14.2.1 (d555a9489eb35f84f2e1ef49b77e19da9d113972) nautilus (stable)
avmlp-osd-009.ciberterminal.net:
    ceph version 14.2.1 (d555a9489eb35f84f2e1ef49b77e19da9d113972) nautilus (stable)
avmlp-osd-010.ciberterminal.net:
    ceph version 14.2.1 (d555a9489eb35f84f2e1ef49b77e19da9d113972) nautilus (stable)
avmlp-osd-011.ciberterminal.net:
    ceph version 14.2.1 (d555a9489eb35f84f2e1ef49b77e19da9d113972) nautilus (stable)
avmlp-osd-012.ciberterminal.net:
    ceph version 14.2.1 (d555a9489eb35f84f2e1ef49b77e19da9d113972) nautilus (stable)
avmlp-osd-013.ciberterminal.net:
    ceph version 14.2.1 (d555a9489eb35f84f2e1ef49b77e19da9d113972) nautilus (stable)
avmlp-osd-014.ciberterminal.net:
    ceph version 14.2.1 (d555a9489eb35f84f2e1ef49b77e19da9d113972) nautilus (stable)
avmlp-osd-015.ciberterminal.net:
    ceph version 14.2.1 (d555a9489eb35f84f2e1ef49b77e19da9d113972) nautilus (stable)
avmlp-osd-016.ciberterminal.net:
    ceph version 14.2.1 (d555a9489eb35f84f2e1ef49b77e19da9d113972) nautilus (stable)
avmlp-osd-017.ciberterminal.net:
    ceph version 14.2.1 (d555a9489eb35f84f2e1ef49b77e19da9d113972) nautilus (stable)
avmlp-osd-018.ciberterminal.net:
    ceph version 14.2.1 (d555a9489eb35f84f2e1ef49b77e19da9d113972) nautilus (stable)
avmlp-osd-019.ciberterminal.net:
    ceph version 14.2.1 (d555a9489eb35f84f2e1ef49b77e19da9d113972) nautilus (stable)
avmlp-osd-020.ciberterminal.net:
    ceph version 14.2.1 (d555a9489eb35f84f2e1ef49b77e19da9d113972) nautilus (stable)
avmlp-osm-001.ciberterminal.net:
    ceph version 14.2.1 (d555a9489eb35f84f2e1ef49b77e19da9d113972) nautilus (stable)
avmlp-osm-002.ciberterminal.net:
    ceph version 14.2.1 (d555a9489eb35f84f2e1ef49b77e19da9d113972) nautilus (stable)
avmlp-osm-003.ciberterminal.net:
    ceph version 14.2.1 (d555a9489eb35f84f2e1ef49b77e19da9d113972) nautilus (stable)
avmlp-osm-004.ciberterminal.net:
    ceph version 14.2.1 (d555a9489eb35f84f2e1ef49b77e19da9d113972) nautilus (stable)
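After the mixed-version mess above, it's worth making this eyeball check mechanical too. A sketch of a filter that counts distinct version strings in output like the above; anything other than 1 means mixed versions again:

```shell
# Count distinct "ceph version X.Y.Z" strings on stdin.
# A homogeneous cluster prints 1.
distinct_versions() {
    grep -o 'ceph version [0-9][0-9.]*' | sort -u | wc -l
}
printf '%s\n' \
    "    ceph version 14.2.1 (d555...) nautilus (stable)" \
    "    ceph version 14.2.1 (d555...) nautilus (stable)" | distinct_versions
```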

Disk check on the OSDs:

ceph@avmlp-osm-001 ~/ceph-deploy :( $ for THESERVER in ${LISTOFSERVERS} ; do     echo "${THESERVER}";     ceph-deploy disk list "${THESERVER}" 2>&1 |egrep "Disk /dev/sdb"; done
avmlp-osd-001.ciberterminal.net
[avmlp-osd-001.ciberterminal.net][INFO  ] Disk /dev/sdb: 2199.0 GB, 2199023255552 bytes, 4294967296 sectors
avmlp-osd-002.ciberterminal.net
[avmlp-osd-002.ciberterminal.net][INFO  ] Disk /dev/sdb: 2199.0 GB, 2199023255552 bytes, 4294967296 sectors
avmlp-osd-003.ciberterminal.net
[avmlp-osd-003.ciberterminal.net][INFO  ] Disk /dev/sdb: 2199.0 GB, 2199023255552 bytes, 4294967296 sectors
avmlp-osd-004.ciberterminal.net
[avmlp-osd-004.ciberterminal.net][INFO  ] Disk /dev/sdb: 2199.0 GB, 2199023255552 bytes, 4294967296 sectors
avmlp-osd-005.ciberterminal.net
[avmlp-osd-005.ciberterminal.net][INFO  ] Disk /dev/sdb: 2199.0 GB, 2199023255552 bytes, 4294967296 sectors
avmlp-osd-006.ciberterminal.net
[avmlp-osd-006.ciberterminal.net][INFO  ] Disk /dev/sdb: 2199.0 GB, 2199023255552 bytes, 4294967296 sectors
avmlp-osd-007.ciberterminal.net
[avmlp-osd-007.ciberterminal.net][INFO  ] Disk /dev/sdb: 2199.0 GB, 2199023255552 bytes, 4294967296 sectors
avmlp-osd-008.ciberterminal.net
[avmlp-osd-008.ciberterminal.net][INFO  ] Disk /dev/sdb: 2199.0 GB, 2199023255552 bytes, 4294967296 sectors
avmlp-osd-009.ciberterminal.net
[avmlp-osd-009.ciberterminal.net][INFO  ] Disk /dev/sdb: 2199.0 GB, 2199023255552 bytes, 4294967296 sectors
avmlp-osd-010.ciberterminal.net
[avmlp-osd-010.ciberterminal.net][INFO  ] Disk /dev/sdb: 2199.0 GB, 2199023255552 bytes, 4294967296 sectors
avmlp-osd-011.ciberterminal.net
[avmlp-osd-011.ciberterminal.net][INFO  ] Disk /dev/sdb: 2199.0 GB, 2199023255552 bytes, 4294967296 sectors
avmlp-osd-012.ciberterminal.net
[avmlp-osd-012.ciberterminal.net][INFO  ] Disk /dev/sdb: 2199.0 GB, 2199023255552 bytes, 4294967296 sectors
avmlp-osd-013.ciberterminal.net
[avmlp-osd-013.ciberterminal.net][INFO  ] Disk /dev/sdb: 2199.0 GB, 2199023255552 bytes, 4294967296 sectors
avmlp-osd-014.ciberterminal.net
[avmlp-osd-014.ciberterminal.net][INFO  ] Disk /dev/sdb: 2199.0 GB, 2199023255552 bytes, 4294967296 sectors
avmlp-osd-015.ciberterminal.net
[avmlp-osd-015.ciberterminal.net][INFO  ] Disk /dev/sdb: 2199.0 GB, 2199023255552 bytes, 4294967296 sectors
avmlp-osd-016.ciberterminal.net
[avmlp-osd-016.ciberterminal.net][INFO  ] Disk /dev/sdb: 2199.0 GB, 2199023255552 bytes, 4294967296 sectors
avmlp-osd-017.ciberterminal.net
[avmlp-osd-017.ciberterminal.net][INFO  ] Disk /dev/sdb: 2199.0 GB, 2199023255552 bytes, 4294967296 sectors
avmlp-osd-018.ciberterminal.net
[avmlp-osd-018.ciberterminal.net][INFO  ] Disk /dev/sdb: 2199.0 GB, 2199023255552 bytes, 4294967296 sectors
avmlp-osd-019.ciberterminal.net
[avmlp-osd-019.ciberterminal.net][INFO  ] Disk /dev/sdb: 2199.0 GB, 2199023255552 bytes, 4294967296 sectors
avmlp-osd-020.ciberterminal.net
[avmlp-osd-020.ciberterminal.net][INFO  ] Disk /dev/sdb: 2199.0 GB, 2199023255552 bytes, 4294967296 sectors

Placing each OSD in the CRUSH hierarchy:

sudo ceph osd crush set osd.1 1.99899 root=default host=avmlp-osd-002 datacenter=mediacloud rack=datacenter02
sudo ceph osd crush set osd.3 1.99899 root=default host=avmlp-osd-004 datacenter=mediacloud rack=datacenter02
sudo ceph osd crush set osd.5 1.99899 root=default host=avmlp-osd-006 datacenter=mediacloud rack=datacenter02
sudo ceph osd crush set osd.7 1.99899 root=default host=avmlp-osd-008 datacenter=mediacloud rack=datacenter02
sudo ceph osd crush set osd.9 1.99899 root=default host=avmlp-osd-010 datacenter=mediacloud rack=datacenter02
sudo ceph osd crush set osd.11 1.99899 root=default host=avmlp-osd-012 datacenter=mediacloud rack=datacenter02
sudo ceph osd crush set osd.13 1.99899 root=default host=avmlp-osd-014 datacenter=mediacloud rack=datacenter02
sudo ceph osd crush set osd.15 1.99899 root=default host=avmlp-osd-016 datacenter=mediacloud rack=datacenter02
sudo ceph osd crush set osd.17 1.99899 root=default host=avmlp-osd-018 datacenter=mediacloud rack=datacenter02
sudo ceph osd crush set osd.19 1.99899 root=default host=avmlp-osd-020 datacenter=mediacloud rack=datacenter02
sudo ceph osd crush set osd.0 1.99899 root=default host=avmlp-osd-001 datacenter=itconic rack=datacenter01
sudo ceph osd crush set osd.2 1.99899 root=default host=avmlp-osd-003 datacenter=itconic rack=datacenter01
sudo ceph osd crush set osd.4 1.99899 root=default host=avmlp-osd-005 datacenter=itconic rack=datacenter01
sudo ceph osd crush set osd.6 1.99899 root=default host=avmlp-osd-007 datacenter=itconic rack=datacenter01
sudo ceph osd crush set osd.8 1.99899 root=default host=avmlp-osd-009 datacenter=itconic rack=datacenter01
sudo ceph osd crush set osd.10 1.99899 root=default host=avmlp-osd-011 datacenter=itconic rack=datacenter01
sudo ceph osd crush set osd.12 1.99899 root=default host=avmlp-osd-013 datacenter=itconic rack=datacenter01
sudo ceph osd crush set osd.14 1.99899 root=default host=avmlp-osd-015 datacenter=itconic rack=datacenter01
sudo ceph osd crush set osd.16 1.99899 root=default host=avmlp-osd-017 datacenter=itconic rack=datacenter01
sudo ceph osd crush set osd.18 1.99899 root=default host=avmlp-osd-019 datacenter=itconic rack=datacenter01

None of the options I tested gave me the “correct” CRUSH map, so I decided to dump it, edit it by hand and compile it back…
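The dump/edit/compile round-trip, printed as a dry run (these are the standard `crushtool` commands; the file names are illustrative, and the output can be piped through `sh` on a monitor node to run for real):

```shell
# Dry run of the CRUSH map round-trip: print each step instead of
# executing it.
crush_roundtrip() {
    echo "ceph osd getcrushmap -o crushmap.bin"      # dump the live map
    echo "crushtool -d crushmap.bin -o crushmap.txt" # decompile to text
    echo "vi crushmap.txt"                           # edit hierarchy/rules
    echo "crushtool -c crushmap.txt -o crushmap.new" # compile it back
    echo "ceph osd setcrushmap -i crushmap.new"      # inject the new map
}
crush_roundtrip
```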
Nice work:

avmlp-osm-001 ~ :( # ceph osd crush tree 
ID  CLASS WEIGHT  TYPE NAME                      
 -1             0 root default                   
-44             0     datacenter itconic         
-43             0         rack datacenter01           
 -3       1.99899             host avmlp-osd-001 
  0   hdd 1.99899                 osd.0          
 -7       1.99899             host avmlp-osd-003 
  2   hdd 1.99899                 osd.2          
-11       1.99899             host avmlp-osd-005 
  4   hdd 1.99899                 osd.4          
-15       1.99899             host avmlp-osd-007 
  6   hdd 1.99899                 osd.6          
-19       1.99899             host avmlp-osd-009 
  8   hdd 1.99899                 osd.8          
-23       1.99899             host avmlp-osd-011 
 10   hdd 1.99899                 osd.10         
-27       1.99899             host avmlp-osd-013 
 12   hdd 1.99899                 osd.12         
-31       1.99899             host avmlp-osd-015 
 14   hdd 1.99899                 osd.14         
-35       1.99899             host avmlp-osd-017 
 16   hdd 1.99899                 osd.16         
-39       1.99899             host avmlp-osd-019 
 18   hdd 1.99899                 osd.18         
-48             0     datacenter mediacloud      
-47             0         rack datacenter02          
 -5       1.99899             host avmlp-osd-002 
  1   hdd 1.99899                 osd.1          
 -9       1.99899             host avmlp-osd-004 
  3   hdd 1.99899                 osd.3          
-13       1.99899             host avmlp-osd-006 
  5   hdd 1.99899                 osd.5          
-17       1.99899             host avmlp-osd-008 
  7   hdd 1.99899                 osd.7          
-21       1.99899             host avmlp-osd-010 
  9   hdd 1.99899                 osd.9          
-25       1.99899             host avmlp-osd-012 
 11   hdd 1.99899                 osd.11         
-29       1.99899             host avmlp-osd-014 
 13   hdd 1.99899                 osd.13         
-33       1.99899             host avmlp-osd-016 
 15   hdd 1.99899                 osd.15         
-37       1.99899             host avmlp-osd-018 
 17   hdd 1.99899                 osd.17         
-41       1.99899             host avmlp-osd-020 
 19   hdd 1.99899                 osd.19      

YEAH:
https://dokuwiki.ciberterminal.net/doku.php?id=ceph:modifying_crush_map

Deploying gateways!

export GWSERVERS="avmlp-osgw-004.ciberterminal.net avmlp-osgw-003.ciberterminal.net avmlp-osgw-001.ciberterminal.net avmlp-osgw-002.ciberterminal.net"
export LISTOFSERVERS=${GWSERVERS}
for THESERVER in ${LISTOFSERVERS} ; do ceph-deploy install --rgw ${THESERVER} ; done
for THESERVER in ${LISTOFSERVERS} ; do ceph-deploy rgw create ${THESERVER} ; done

Check version:

avmlm-salt-001 /home/bofher/scripts/ceph/avmlp-os # for i in $(salt "${THESERVER}" test.ping| egrep "^a"|awk -F\: '{print $1}'| sort) ; do salt "${i}" cmd.run "rpm -qa|egrep radosgw" ; done
avmlp-osgw-001.ciberterminal.net:
    ceph-radosgw-14.2.1-0.el7.x86_64
avmlp-osgw-002.ciberterminal.net:
    ceph-radosgw-14.2.1-0.el7.x86_64
avmlp-osgw-003.ciberterminal.net:
    ceph-radosgw-14.2.1-0.el7.x86_64
avmlp-osgw-004.ciberterminal.net:
    ceph-radosgw-14.2.1-0.el7.x86_64

Re-deployed configuration for citeweb.

Failure with the keyring :-(

ceph@avmlp-osm-001 ~/ceph-deploy $ for THESERVER in ${LISTOFSERVERS} ; do ssh ${THESERVER} "sudo radosgw-admin zone get" ; done
2019-06-11 16:20:42.800 7f18ee957580 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
2019-06-11 16:20:42.800 7f18ee957580 -1 AuthRegistry(0x557109b74d68) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,, disabling cephx
2019-06-11 16:20:42.801 7f18ee957580 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
2019-06-11 16:20:42.801 7f18ee957580 -1 AuthRegistry(0x7ffdf8e9a7e8) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,, disabling cephx
failed to fetch mon config (--no-mon-config to skip)
2019-06-11 16:20:43.023 7f4234966580 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
2019-06-11 16:20:43.023 7f4234966580 -1 AuthRegistry(0x5594ec341d68) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,, disabling cephx
2019-06-11 16:20:43.025 7f4234966580 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
2019-06-11 16:20:43.025 7f4234966580 -1 AuthRegistry(0x7ffd555fef98) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,, disabling cephx
failed to fetch mon config (--no-mon-config to skip)
2019-06-11 16:20:43.244 7f9afa36f580 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
2019-06-11 16:20:43.244 7f9afa36f580 -1 AuthRegistry(0x563b6e0f2d58) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,, disabling cephx
2019-06-11 16:20:43.245 7f9afa36f580 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
2019-06-11 16:20:43.245 7f9afa36f580 -1 AuthRegistry(0x7ffdffbc62c8) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,, disabling cephx
failed to fetch mon config (--no-mon-config to skip)
2019-06-11 16:20:43.443 7fae77a17580 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
2019-06-11 16:20:43.443 7fae77a17580 -1 AuthRegistry(0x56490bcf9d68) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,, disabling cephx
2019-06-11 16:20:43.444 7fae77a17580 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
2019-06-11 16:20:43.444 7fae77a17580 -1 AuthRegistry(0x7ffc3dcbbf58) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,, disabling cephx
failed to fetch mon config (--no-mon-config to skip)

Solved, documentation here.
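For the record, the usual fix for the missing-keyring errors above is to push the cluster config and admin keyring from the ceph-deploy working directory to the affected host; a sketch (hostname taken from this cluster, exact steps depend on where the keys were generated):

```shell
# Hedged sketch: gather the cluster keys and push ceph.conf plus the admin
# keyring to the node, then make the keyring readable for the ceph user.
ceph-deploy gatherkeys avmlp-osm-001
ceph-deploy admin avmlp-osm-001
sudo chmod +r /etc/ceph/ceph.client.admin.keyring
```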

WE HAVE HYPERCEPH!!!!

Fuck:

ceph@avmlp-osm-001 ~/ceph-deploy $ sudo ceph health
HEALTH_WARN too few PGs per OSD (4 < min 30)
ceph@avmlp-osm-001 ~/ceph-deploy $ sudo ceph osd pool ls detail
pool 1 '.rgw.root' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 autoscale_mode warn last_change 171 flags hashpspool stripe_width 0 application rgw
pool 2 'default.rgw.control' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 autoscale_mode warn last_change 173 flags hashpspool stripe_width 0 application rgw
pool 3 'default.rgw.meta' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 autoscale_mode warn last_change 175 flags hashpspool stripe_width 0 application rgw
pool 4 'default.rgw.log' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 autoscale_mode warn last_change 176 flags hashpspool stripe_width 0 application rgw
ceph@avmlp-osm-001 ~/ceph-deploy $ sudo ceph osd crush rule dump
[
    {
        "rule_id": 0,
        "rule_name": "replicated_rule",
        "ruleset": 0,
        "type": 1,
        "min_size": 1,
        "max_size": 10,
        "steps": [
            {
                "op": "take",
                "item": -1,
                "item_name": "default"
            },
            {
                "op": "chooseleaf_firstn",
                "num": 0,
                "type": "host"
            },
            {
                "op": "emit"
            }
        ]
    },
    {
        "rule_id": 1,
        "rule_name": "ciberterminalRule",
        "ruleset": 1,
        "type": 1,
        "min_size": 1,
        "max_size": 10,
        "steps": [
            {
                "op": "take",
                "item": -1,
                "item_name": "default"
            },
            {
                "op": "chooseleaf_firstn",
                "num": 0,
                "type": "datacenter"
            },
            {
                "op": "emit"
            }
        ]
    }
]

So we have at least two problems:

  • too few PGs
  • the pools are not using the ciberterminal CRUSH rule

Solution:

  • I'll create a new pool with the pg_num we need and with the ciberterminalRule CRUSH rule.
  • I'll also change the options of the other pools.
ceph@avmlp-osm-001 ~/ceph-deploy $ sudo ceph osd pool set default.rgw.control crush_rule ciberterminalRule
set pool 2 crush_rule to ciberterminalRule
ceph@avmlp-osm-001 ~/ceph-deploy $ sudo ceph osd pool ls detail
pool 1 '.rgw.root' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 autoscale_mode warn last_change 171 flags hashpspool stripe_width 0 application rgw
pool 2 'default.rgw.control' replicated size 3 min_size 2 crush_rule 1 object_hash rjenkins pg_num 8 pgp_num 8 autoscale_mode warn last_change 207 flags hashpspool stripe_width 0 application rgw
pool 3 'default.rgw.meta' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 autoscale_mode warn last_change 175 flags hashpspool stripe_width 0 application rgw
pool 4 'default.rgw.log' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 autoscale_mode warn last_change 176 flags hashpspool stripe_width 0 application rgw
 
ceph@avmlp-osm-001 ~/ceph-deploy $ sudo ceph osd pool set default.rgw.meta crush_rule ciberterminalRule
set pool 3 crush_rule to ciberterminalRule
ceph@avmlp-osm-001 ~/ceph-deploy $ sudo ceph osd pool set default.rgw.log crush_rule ciberterminalRule
set pool 4 crush_rule to ciberterminalRule
ceph@avmlp-osm-001 ~/ceph-deploy $ sudo ceph osd pool ls detail
pool 1 '.rgw.root' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 autoscale_mode warn last_change 171 flags hashpspool stripe_width 0 application rgw
pool 2 'default.rgw.control' replicated size 3 min_size 2 crush_rule 1 object_hash rjenkins pg_num 8 pgp_num 8 autoscale_mode warn last_change 207 flags hashpspool stripe_width 0 application rgw
pool 3 'default.rgw.meta' replicated size 3 min_size 2 crush_rule 1 object_hash rjenkins pg_num 8 pgp_num 8 autoscale_mode warn last_change 210 flags hashpspool stripe_width 0 application rgw
pool 4 'default.rgw.log' replicated size 3 min_size 2 crush_rule 1 object_hash rjenkins pg_num 8 pgp_num 8 autoscale_mode warn last_change 213 flags hashpspool stripe_width 0 application rgw
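Setting the rule pool by pool works, but the same change can be applied to every pool in one loop (the same pattern used further down this page):

```shell
# Apply the ciberterminalRule CRUSH rule to every existing pool in one pass.
for i in $(ceph osd pool ls) ; do
    ceph osd pool set ${i} crush_rule ciberterminalRule
done
```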

OK, let's do the default one.

ceph@avmlp-osm-001 ~/ceph-deploy $ sudo ceph osd pool ls detail
pool 1 '.rgw.root' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 autoscale_mode warn last_change 171 flags hashpspool stripe_width 0 application rgw
pool 2 'default.rgw.control' replicated size 3 min_size 2 crush_rule 1 object_hash rjenkins pg_num 8 pgp_num 8 autoscale_mode warn last_change 207 flags hashpspool stripe_width 0 application rgw
pool 3 'default.rgw.meta' replicated size 3 min_size 2 crush_rule 1 object_hash rjenkins pg_num 8 pgp_num 8 autoscale_mode warn last_change 210 flags hashpspool stripe_width 0 application rgw
pool 4 'default.rgw.log' replicated size 3 min_size 2 crush_rule 1 object_hash rjenkins pg_num 8 pgp_num 8 autoscale_mode warn last_change 213 flags hashpspool stripe_width 0 application rgw
pool 5 'new.default.rgw.buckets.data' replicated size 3 min_size 2 crush_rule 1 object_hash rjenkins pg_num 500 pgp_num 500 autoscale_mode warn last_change 218 flags hashpspool stripe_width 0
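The command that created pool 5 is not shown above; assuming the same defaults used elsewhere on this page, it would have been something like:

```shell
# Hypothetical reconstruction of the pool 5 creation: 500 PGs/PGPs,
# replicated, then moved onto the ciberterminalRule CRUSH rule.
ceph osd pool create new.default.rgw.buckets.data 500 500 replicated
ceph osd pool set new.default.rgw.buckets.data crush_rule ciberterminalRule
```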

Well… I found the way to change options on existing pools:

avmlp-osm-001 /etc/ceph :( # for i in $(ceph osd pool ls) ; do  ceph osd pool set ${i} pg_num 500 ; done                                                                                                                                                                               
set pool 2 pg_num to 500
set pool 3 pg_num to 500
Error ERANGE: pool id 4 pg_num 500 size 3 would mean 6000 total pgs, which exceeds max 5000 (mon_max_pg_per_osd 250 * num_in_osds 20)
avmlp-osm-001 /etc/ceph :( # for i in $(ceph osd pool ls|egrep -v root) ; do  ceph osd pool set ${i} pg_num 60 ; done                                                                                                                                                                  
set pool 2 pg_num to 60
set pool 3 pg_num to 60
set pool 4 pg_num to 60
avmlp-osm-001 /etc/ceph # for i in $(ceph osd pool ls|egrep -v root) ; do  ceph osd pool set ${i} pgp_num 60 ; done                                                                                                                                                                    
set pool 2 pgp_num to 60
set pool 3 pgp_num to 60
set pool 4 pgp_num to 60
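The ERANGE error above can be reproduced with simple arithmetic: the monitor caps the cluster at mon_max_pg_per_osd × num_in_osds placement groups, counting every replica, and four pools at pg_num 500 with size 3 blow past it:

```shell
# Back-of-the-envelope check of the PG budget, using the numbers from the
# ERANGE error message itself.
num_in_osds=20
mon_max_pg_per_osd=250
max_total_pgs=$(( num_in_osds * mon_max_pg_per_osd ))   # cluster-wide cap

size=3
pools=4          # .rgw.root already excluded? no: 4 pools would hit 500 each
requested=$(( pools * 500 * size ))

echo "cap=${max_total_pgs} requested=${requested}"      # cap=5000 requested=6000
```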

But I found that the cluster-wide limit is checked against the sum of every pool's pg_num (times the replica count).
So I changed the Ceph internal pools to a pg_num of 60… that should be enough.
Then I get:

ceph@avmlp-osm-001 ~/ceph-deploy :( $ sudo ceph health                                                                                                                                                                                                                                 
HEALTH_OK
ceph@avmlp-osm-001 ~/ceph-deploy $ sudo ceph osd pool ls detail                                                                                                                                                                                                                        
pool 1 '.rgw.root' replicated size 3 min_size 2 crush_rule 1 object_hash rjenkins pg_num 500 pgp_num 500 autoscale_mode warn last_change 231 lfor 0/0/220 flags hashpspool stripe_width 0 application rgw
pool 2 'default.rgw.control' replicated size 3 min_size 2 crush_rule 1 object_hash rjenkins pg_num 477 pgp_num 8 pg_num_target 60 pgp_num_target 60 pg_num_pending 476 autoscale_mode warn last_change 331 lfor 0/331/331 flags hashpspool stripe_width 0 application rgw
pool 3 'default.rgw.meta' replicated size 3 min_size 2 crush_rule 1 object_hash rjenkins pg_num 477 pgp_num 8 pg_num_target 60 pgp_num_target 60 pg_num_pending 476 autoscale_mode warn last_change 331 lfor 0/331/331 flags hashpspool stripe_width 0 application rgw
pool 4 'default.rgw.log' replicated size 3 min_size 2 crush_rule 1 object_hash rjenkins pg_num 60 pgp_num 8 pgp_num_target 60 autoscale_mode warn last_change 269 lfor 0/0/243 flags hashpspool stripe_width 0 application rgw

There's still one parameter that needs to be changed: replicated size 3

Done:

avmlp-osm-001 /etc/ceph # for i in $(ceph osd pool ls) ; do  ceph osd pool set ${i} size 4 ; done
set pool 1 size to 4
set pool 2 size to 4
set pool 3 size to 4
set pool 4 size to 4
avmlp-osm-001 /etc/ceph # ceph osd pool ls detail
pool 1 '.rgw.root' replicated size 4 min_size 2 crush_rule 1 object_hash rjenkins pg_num 500 pgp_num 500 autoscale_mode warn last_change 688 lfor 0/0/220 flags hashpspool stripe_width 0 application rgw
pool 2 'default.rgw.control' replicated size 4 min_size 2 crush_rule 1 object_hash rjenkins pg_num 385 pgp_num 8 pg_num_target 60 pgp_num_target 60 autoscale_mode warn last_change 691 lfor 0/691/688 flags hashpspool stripe_width 0 application rgw
pool 3 'default.rgw.meta' replicated size 4 min_size 2 crush_rule 1 object_hash rjenkins pg_num 384 pgp_num 8 pg_num_target 60 pgp_num_target 60 autoscale_mode warn last_change 694 lfor 0/694/692 flags hashpspool stripe_width 0 application rgw
pool 4 'default.rgw.log' replicated size 4 min_size 2 crush_rule 1 object_hash rjenkins pg_num 60 pgp_num 8 pgp_num_target 60 autoscale_mode warn last_change 691 lfor 0/0/243 flags hashpspool stripe_width 0 application rgw

Re-deploying rgw

I had to use --repo-url because, for some unknown reason, ceph-deploy was installing RGW from version 13 (Mimic) instead of Nautilus:

for THESERVER in ${LISTOFSERVERS} ; do ceph-deploy install --rgw ${THESERVER} --repo-url http://download.ceph.com/rpm-nautilus/el7/ ; done
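After the install it is worth verifying that every node actually got Nautilus (14.x) and not Mimic (13.x); a quick check over the same server list:

```shell
# Confirm the installed RGW release on every node; after installing from the
# rpm-nautilus repo, radosgw should report version 14.x.
for THESERVER in ${LISTOFSERVERS} ; do
    ssh ${THESERVER} "radosgw --version"
done
```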

Deploying MDS

for i in 1 3  ; do bash CloneWars.sh -F -c datacenter01 -i 10.20.55.1${i} -v 4 -o 2  -r 4096 -O -m 20 -h AVMLP-OSFS-00${i}  ; done
for i in 2 4  ; do bash CloneWars.sh -F -c datacenter02 -i 10.20.55.1${i} -v 4 -o 2  -r 4096 -O -m 20 -h AVMLP-OSFS-00${i}  ; done
export THESERVER="avmlp-osfs*.ciberterminal.net"
salt "${THESERVER}" state.apply 
salt "${THESERVER}" state.apply nsupdate
salt "${THESERVER}" pkg.install yum-plugin-priorities
salt "${THESERVER}" user.add ceph 1002
salt "${THESERVER}" file.write /etc/sudoers.d/ceph "ceph ALL = (root) NOPASSWD:ALL"
salt "${THESERVER}" cmd.run "cat /etc/sudoers.d/ceph"
salt "${THESERVER}" cmd.run "sudo whoami" runas=ceph
salt "${THESERVER}" cmd.run     "ssh-keygen -q -N '' -f /home/ceph/.ssh/id_rsa"     runas=ceph
salt "${THESERVER}" cmd.run "cat /home/ceph/.ssh/id_rsa.pub" |egrep -v "^a" | sed 's/^[[:space:]]\{1,5\}//g' > auth_keys_oss.txt
export THESERVER="avmlp-os*.ciberterminal.net"
salt "${THESERVER}" file.copy /home/ceph/.ssh/id_rsa.pub /home/ceph/.ssh/authorized_keys
while read LINE ; do salt "${THESERVER}" file.append /home/ceph/.ssh/authorized_keys "${LINE}" ; done < auth_keys_avmlp-os.txt

From osm-001:

export MDSSERVERS="avmlp-osfs-002.ciberterminal.net avmlp-osfs-001.ciberterminal.net avmlp-osfs-004.ciberterminal.net avmlp-osfs-003.ciberterminal.net"
export LISTOFSERVERS=${MDSSERVERS}
for i in ${MDSSERVERS} ; do scp ceph.repo ${i}:/home/ceph/ ; ssh ${i} "sudo mv /home/ceph/ceph.repo /etc/yum.repos.d/"  ; done
ceph-deploy install ${LISTOFSERVERS}
ceph-deploy mds create ${LISTOFSERVERS}

MDS information will appear once the CephFS is created.

export POOL_NAME="cephfs_data-ftp"
ceph osd pool create ${POOL_NAME} 128 128 replicated
ceph osd pool set ${POOL_NAME} crush_rule ciberterminalRule	 
ceph osd pool set ${POOL_NAME} compression_algorithm snappy
ceph osd pool set ${POOL_NAME} compression_mode aggressive
ceph osd pool set ${POOL_NAME} compression_min_blob_size 10240
ceph osd pool set ${POOL_NAME} compression_max_blob_size 4194304
ceph osd pool set ${POOL_NAME} pg_autoscale_mode on
export POOL_NAME="cephfs_metadata-ftp"
 
ceph osd pool create ${POOL_NAME} 128 128 replicated
ceph osd pool set ${POOL_NAME} crush_rule ciberterminalRule	
ceph osd pool set ${POOL_NAME} pg_autoscale_mode on
 
ceph fs new cephfs cephfs_metadata-ftp cephfs_data-ftp
ceph fs ls
ceph -s
ceph mds stat
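To actually use the new filesystem a client needs its own key and a mount; a minimal sketch along the lines of the article linked below (the client name "ftp", the monitor address and the mountpoint are illustrative assumptions):

```shell
# Create a CephFS client authorized on the filesystem root, extract its bare
# secret for the kernel mount, and mount the filesystem.
ceph fs authorize cephfs client.ftp / rw
ceph auth get-key client.ftp | sudo tee /etc/ceph/client.ftp.secret
sudo mkdir -p /mnt/cephfs
sudo mount -t ceph avmlp-osm-001.ciberterminal.net:6789:/ /mnt/cephfs \
    -o name=ftp,secretfile=/etc/ceph/client.ftp.secret
```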

https://ceph.com/planet/cephfs-admin-tips-create-a-new-user-and-share/

ceph/from_scratch.txt · Last modified: 2019/07/18 09:17 (external edit)