Hello There!
I have a cluster that has been running the proposed packages since they were released:
$ dpkg -l | grep 17.2.6 | awk '{print $2"\t\t"$3"\t\t"$4}'
ceph                    17.2.6-0ubuntu0.22.04.1    amd64
ceph-base               17.2.6-0ubuntu0.22.04.1    amd64
ceph-common             17.2.6-0ubuntu0.22.04.1    amd64
ceph-mds                17.2.6-0ubuntu0.22.04.1    amd64
ceph-mgr                17.2.6-0ubuntu0.22.04.1    amd64
ceph-mgr-modules-core   17.2.6-0ubuntu0.22.04.1    all
ceph-mon                17.2.6-0ubuntu0.22.04.1    amd64
ceph-osd                17.2.6-0ubuntu0.22.04.1    amd64
ceph-volume             17.2.6-0ubuntu0.22.04.1    all
libcephfs2              17.2.6-0ubuntu0.22.04.1    amd64
librados2               17.2.6-0ubuntu0.22.04.1    amd64
libradosstriper1        17.2.6-0ubuntu0.22.04.1    amd64
librbd1                 17.2.6-0ubuntu0.22.04.1    amd64
librgw2                 17.2.6-0ubuntu0.22.04.1    amd64
libsqlite3-mod-ceph     17.2.6-0ubuntu0.22.04.1    amd64
python3-ceph-argparse   17.2.6-0ubuntu0.22.04.1    amd64
python3-ceph-common     17.2.6-0ubuntu0.22.04.1    all
python3-cephfs          17.2.6-0ubuntu0.22.04.1    amd64
python3-rados           17.2.6-0ubuntu0.22.04.1    amd64
python3-rbd             17.2.6-0ubuntu0.22.04.1    amd64
radosgw                 17.2.6-0ubuntu0.22.04.1    amd64
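For anyone wanting to reproduce: enabling the -proposed pocket on Jammy generally looks like the lines below. The mirror URL and component list are illustrative only (adjust them to match your existing sources), and installing with -t limits the pull from proposed to the named packages:
$ echo 'deb http://archive.ubuntu.com/ubuntu jammy-proposed main universe' | sudo tee /etc/apt/sources.list.d/jammy-proposed.list
$ sudo apt-get update
$ sudo apt-get install -t jammy-proposed ceph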
$ sudo ceph mgr module ls
MODULE
balancer           on (always on)
crash              on (always on)
devicehealth       on (always on)
orchestrator       on (always on)
pg_autoscaler      on (always on)
progress           on (always on)
rbd_support        on (always on)
status             on (always on)
telemetry          on (always on)
volumes            on (always on)
iostat             on
nfs                on
restful            on
alerts             -
influx             -
insights           -
localpool          -
mirroring          -
osd_perf_query     -
osd_support        -
prometheus         -
selftest           -
snap_schedule      -
stats              -
telegraf           -
test_orchestrator  -
zabbix             -
$ sudo ceph -s
  cluster:
    id:     6c2efd86-7423-11ed-97ec-2f3ef93079f7
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum juju-b096f0-88-lxd-0,juju-b096f0-90-lxd-0,juju-b096f0-92-lxd-0 (age 10h)
    mgr: juju-b096f0-88-lxd-0(active, since 4d), standbys: juju-b096f0-92-lxd-0, juju-b096f0-90-lxd-0
    osd: 8 osds: 8 up (since 2d), 8 in (since 2w)

  data:
    pools:   3 pools, 289 pgs
    objects: 169.40k objects, 492 GiB
    usage:   1.5 TiB used, 892 GiB / 2.3 TiB avail
    pgs:     289 active+clean
Installed ceph-mgr-rook on all Mon units:
$ juju run -a ceph-mon-ssd 'sudo apt-get -y install ceph-mgr-rook'
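To confirm the package actually landed on every unit, a quick check along these lines works (reusing the ceph-mon-ssd application name from the command above):
$ juju run -a ceph-mon-ssd 'dpkg -s ceph-mgr-rook | grep Version'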
Checked cluster status:
$ sudo ceph -s
  cluster:
    id:     6c2efd86-7423-11ed-97ec-2f3ef93079f7
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum juju-b096f0-88-lxd-0,juju-b096f0-90-lxd-0,juju-b096f0-92-lxd-0 (age 10h)
    mgr: juju-b096f0-88-lxd-0(active, since 2m), standbys: juju-b096f0-90-lxd-0, juju-b096f0-92-lxd-0
    osd: 8 osds: 8 up (since 2d), 8 in (since 2w)

  data:
    pools:   3 pools, 289 pgs
    objects: 169.40k objects, 492 GiB
    usage:   1.5 TiB used, 892 GiB / 2.3 TiB avail
    pgs:     289 active+clean
$ sudo ceph mgr module ls
MODULE
balancer           on (always on)
crash              on (always on)
devicehealth       on (always on)
orchestrator       on (always on)
pg_autoscaler      on (always on)
progress           on (always on)
rbd_support        on (always on)
status             on (always on)
telemetry          on (always on)
volumes            on (always on)
iostat             on
nfs                on
restful            on
alerts             -
influx             -
insights           -
localpool          -
mirroring          -
osd_perf_query     -
osd_support        -
prometheus         -
rook               -
selftest           -
snap_schedule      -
stats              -
telegraf           -
test_orchestrator  -
zabbix             -
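The rook module is now listed, still disabled, which is expected since this cluster is not managed by Rook. For anyone wanting to exercise it further, it can be enabled with the usual mgr module command (I have not enabled it on this cluster):
$ sudo ceph mgr module enable rook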
Please let me know if there is anything else you'd like me to test.
Best,
Alan