Unable to deploy ceph from instructions - block-logical fails

Asked by Karl Kloppenborg

Hi,
I have a three-node cluster; each node has two drives multipathed to it:
/dev/mapper/osd and /dev/mapper/journal

The osd-init container fails with the following:
-------------------------
controller2:~$ kubectl -n ceph logs pod/ceph-osd-default-83945928-5q6gs osd-init -f
Initializing the osd with ceph-disk
+ echo 'Initializing the osd with ceph-disk'
+ exec /tmp/init-ceph-disk.sh
+ source /tmp/osd-common-ceph-disk.sh
++ set -ex
++ : 'root=default host=controller2'
++ : /var/lib/ceph/osd/ceph
++ : /etc/ceph/ceph.conf
++ : /var/lib/ceph/bootstrap-osd/ceph.keyring
+++ uuidgen
++ : f3300b7c-03fb-4d56-8fd8-f64b992be1a1
+++ awk '/^osd_journal_size/{print $3}' /etc/ceph/ceph.conf.template
++ : 10240
++ : 1.0
+++ cat /etc/ceph/storage.json
+++ python -c 'import sys, json; data = json.load(sys.stdin); print(json.dumps(data["failure_domain"]))'
++ eval 'CRUSH_FAILURE_DOMAIN_TYPE="host"'
+++ CRUSH_FAILURE_DOMAIN_TYPE=host
+++ cat /etc/ceph/storage.json
+++ python -c 'import sys, json; data = json.load(sys.stdin); print(json.dumps(data["failure_domain_name"]))'
++ eval 'CRUSH_FAILURE_DOMAIN_NAME="false"'
+++ CRUSH_FAILURE_DOMAIN_NAME=false
+++ cat /etc/ceph/storage.json
+++ python -c 'import sys, json; data = json.load(sys.stdin); print(json.dumps(data["failure_domain_by_hostname"]))'
++ eval 'CRUSH_FAILURE_DOMAIN_BY_HOSTNAME="false"'
+++ CRUSH_FAILURE_DOMAIN_BY_HOSTNAME=false
+++ cat /etc/ceph/storage.json
+++ python -c 'import sys, json; data = json.load(sys.stdin); print(json.dumps(data["device_class"]))'
++ eval 'DEVICE_CLASS=""'
+++ DEVICE_CLASS=
+++ ceph -v
+++ egrep -q 'nautilus|mimic|luminous'
+++ echo 0
++ [[ 0 -ne 0 ]]
++ '[' -z controller2 ']'
++ [[ ! -e /etc/ceph/ceph.conf.template ]]
+++ kubectl get endpoints ceph-mon-discovery -n ceph -o json
+++ awk '-F"' -v port=6789 -v version=v1 -v msgr_version=v2 -v msgr2_port=3300 '/"ip"/{print "["version":"$4":"port"/"0","msgr_version":"$4":"msgr2_port"/"0"]"}'
+++ paste -sd,
++ ENDPOINT='[v1:172.21.24.1:6789/0,v2:172.21.24.1:3300/0],[v1:172.21.24.2:6789/0,v2:172.21.24.2:3300/0],[v1:172.21.24.3:6789/0,v2:172.21.24.3:3300/0]'
++ [[ [v1:172.21.24.1:6789/0,v2:172.21.24.1:3300/0],[v1:172.21.24.2:6789/0,v2:172.21.24.2:3300/0],[v1:172.21.24.3:6789/0,v2:172.21.24.3:3300/0] == '' ]]
++ /bin/sh -c -e 'cat /etc/ceph/ceph.conf.template | sed '\''s#mon_host.*#mon_host = [v1:172.21.24.1:6789/0,v2:172.21.24.1:3300/0],[v1:172.21.24.2:6789/0,v2:172.21.24.2:3300/0],[v1:172.21.24.3:6789/0,v2:172.21.24.3:3300/0]#g'\'' | tee /etc/ceph/ceph.conf'
[global]
cephx = true
cephx_cluster_require_signatures = true
cephx_require_signatures = false
cephx_service_require_signatures = false
debug_ms = 0/0
fsid = d1e8ee62-1d0e-457d-a418-49d675c76ea1
mon_host = [v1:172.21.24.1:6789/0,v2:172.21.24.1:3300/0],[v1:172.21.24.2:6789/0,v2:172.21.24.2:3300/0],[v1:172.21.24.3:6789/0,v2:172.21.24.3:3300/0]
mon_osd_down_out_interval = 1800
mon_osd_down_out_subtree_limit = root
mon_osd_min_in_ratio = 0
mon_osd_min_up_ratio = 0
objecter_inflight_op_bytes = 1073741824
objecter_inflight_ops = 10240
[osd]
cluster_network = 172.21.24.0/24
filestore_max_sync_interval = 10
filestore_merge_threshold = -10
filestore_split_multiple = 12
ms_bind_port_max = 7100
ms_bind_port_min = 6800
osd_crush_update_on_start = false
osd_deep_scrub_stride = 1048576
osd_journal_size = 10240
osd_max_object_name_len = 256
osd_mkfs_options_xfs = -f -i size=2048
osd_mkfs_type = xfs
osd_mount_options_xfs = rw,noatime,largeio,inode64,swalloc,logbufs=8,logbsize=256k,allocsize=4M
osd_pg_max_concurrent_snap_trims = 1
osd_recovery_max_active = 1
osd_recovery_op_priority = 1
osd_scrub_begin_hour = 22
osd_scrub_chunk_max = 4
osd_scrub_chunk_min = 1
osd_scrub_during_recovery = false
osd_scrub_end_hour = 4
osd_scrub_load_threshold = 10
osd_scrub_priority = 1
osd_scrub_sleep = 0.1
osd_snap_trim_priority = 1
osd_snap_trim_sleep = 0.1
public_network = 172.21.24.0/24
[target]
required_percent_of_osds = 75
+ : 1
+ : 0
+ '[' xblock == xbluestore ']'
+ '[' xblock == xdirectory ']'
++ readlink -f /dev/mapper/osda
+ export OSD_DEVICE=/dev/dm-0
+ OSD_DEVICE=/dev/dm-0
+ '[' xblock-logical == xdirectory ']'
++ readlink -f /dev/mapper/journal
+ export OSD_JOURNAL=/dev/dm-2
+ OSD_JOURNAL=/dev/dm-2
+ '[' xblock == xdirectory ']'
+ osd_disk_prepare
+ [[ -z /dev/dm-0 ]]
+ [[ ! -b /dev/dm-0 ]]
+ '[' '!' -e /var/lib/ceph/bootstrap-osd/ceph.keyring ']'
+ timeout 10 ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring health
HEALTH_OK
+ parted --script /dev/dm-0 print
++ parted --script /dev/dm-0 print
++ egrep '^ 1.*ceph data'
+ [[ -n '' ]]
+ '[' 0 -eq 1 ']'
+ osd_journal_prepare
+ '[' -n /dev/dm-2 ']'
+ '[' -b /dev/dm-2 ']'
++ readlink -f /dev/dm-2
+ OSD_JOURNAL=/dev/dm-2
++ echo /dev/dm-2
++ sed 's/[^0-9]//g'
+ OSD_JOURNAL_PARTITION=2
++ echo /dev/dm-2
++ sed 's/[0-9]//g'
+ local jdev=/dev/dm-
+ '[' -z 2 ']'
+ OSD_JOURNAL=/dev/dm-2
+ chown ceph. /dev/dm-2
+ CLI_OPTS=' --filestore'
+ CLI_OPTS=' --filestore --journal-uuid f3300b7c-03fb-4d56-8fd8-f64b992be1a1 /dev/dm-0'
+ '[' xblock-logical == xdirectory ']'
+ CLI_OPTS=' --filestore --journal-uuid f3300b7c-03fb-4d56-8fd8-f64b992be1a1 /dev/dm-0 /dev/dm-2'
+ udev_settle
+ partprobe /dev/dm-0
+ '[' 0 -eq 1 ']'
+ '[' xblock-logical == xblock-logical ']'
+ '[' '!' -z /dev/dm-2 ']'
++ readlink -f /dev/dm-2
+ OSD_JOURNAL=/dev/dm-2
+ '[' '!' -z /dev/dm-2 ']'
++ echo /dev/dm-2
++ sed 's/[0-9]//g'
+ local JDEV=/dev/dm-
+ partprobe /dev/dm-
Error: Could not stat device /dev/dm- - No such file or directory.
----------------------------------
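
As far as I can tell, the failing step is the journal handling in the block-logical branch at the end of the trace: the script strips the digits from the resolved journal device to get a parent device to run partprobe against, which would work for a partition such as /dev/sdb2 but not for a device-mapper node such as /dev/dm-2. A rough sketch of what the trace shows (my reading of it, not necessarily the chart's exact code):

-----------------------------------
# Reproduction of the journal-device handling from the trace above.
OSD_JOURNAL=$(readlink -f /dev/mapper/journal)     # -> /dev/dm-2
JDEV=$(echo "${OSD_JOURNAL}" | sed 's/[0-9]//g')   # -> /dev/dm-   (digits stripped)
partprobe "${JDEV}"                                # Error: Could not stat device /dev/dm-
-----------------------------------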

The pods look like this:

-----------------------------------
controller2:~$ kubectl get pods -n ceph
NAME                                  READY   STATUS                  RESTARTS   AGE
ceph-bootstrap-bfvs2                  0/1     Completed               0          3m42s
ceph-mds-keyring-generator-zb4mz      0/1     Completed               0          3m42s
ceph-mgr-keyring-generator-dhgk6      0/1     Completed               0          3m42s
ceph-mon-7pg9x                        1/1     Running                 0          3m42s
ceph-mon-87978                        1/1     Running                 0          3m42s
ceph-mon-8ns2l                        1/1     Running                 0          3m42s
ceph-mon-check-6ddfcff495-8cstv       1/1     Running                 0          3m42s
ceph-mon-keyring-generator-6pnq6      0/1     Completed               0          3m42s
ceph-osd-default-83945928-5q6gs       0/2     Init:CrashLoopBackOff   3          90s
ceph-osd-default-83945928-8rrcc       0/2     Init:CrashLoopBackOff   3          90s
ceph-osd-default-83945928-dtcs9       0/2     Init:CrashLoopBackOff   3          90s
ceph-osd-keyring-generator-nmcgm      0/1     Completed               0          3m42s
ceph-storage-keys-generator-6jbm6     0/1     Completed               0          3m42s
ingress-744f5d5fb8-l7tmm              1/1     Running                 1          28h
ingress-744f5d5fb8-z49fp              1/1     Running                 0          28h
ingress-error-pages-94d6c99d5-fkg98   1/1     Running                 1          28h
ingress-error-pages-94d6c99d5-rm965   1/1     Running                 0          28h

-------------------------------------------

Configuration for ceph is:
--------------------------------------------
endpoints:
  ceph_mon:
    namespace: ceph
network:
  public: 172.21.24.0/24
  cluster: 172.21.24.0/24
deployment:
  storage_secrets: true
  ceph: true
  rbd_provisioner: true
  cephfs_provisioner: false
  client_secrets: false
bootstrap:
  enabled: true
deploy:
  tool: "ceph-disk"
conf:
  ceph:
    global:
      fsid: d1e8ee62-1d0e-457d-a418-49d675c76ea1
  pool:
    crush:
      tunables: null
    target:
      osd: 3
      pg_per_osd: 100
  storage:
    osd:
      - data:
          type: "block-logical"
          location: /dev/mapper/osd
        journal:
          type: "block-logical"
          location: /dev/mapper/journal
storageclass:
  cephfs:
    provision_storage_class: false
manifests:
  deployment_cephfs_provisioner: false
  job_cephfs_client_key: false
------------------------------------------------------------
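
For what it's worth, the trace shows both mapper devices resolving and passing the block-device test, so the devices themselves seem fine. The equivalent check from a node looks like this (illustrative commands only, output omitted; readlink is the same call the init script makes):

------------------------------------------------------------
# Sanity checks on a storage node.
readlink -f /dev/mapper/osd        # resolves to a /dev/dm-* node
readlink -f /dev/mapper/journal    # resolves to /dev/dm-2 in the trace above
test -b "$(readlink -f /dev/mapper/journal)" && echo "journal is a block device"
------------------------------------------------------------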

I have also tried without quotes around block-logical, but I get the same result.

Any assistance would be greatly appreciated!

Question information

Language: English
Status: Expired
For: openstack-helm
Assignee: No assignee
Launchpad Janitor (janitor) said:
#1

This question was expired because it remained in the 'Open' state without activity for the last 15 days.