Here's the test plan for Kinetic:
First, deploy a small ceph cluster. If using Juju, we can use something like:
$ juju add-machine --series="kinetic"
Add the -proposed archives. On every target machine, the following file must exist with these contents:
$ cat /etc/apt/sources.list.d/ubuntu-kinetic-proposed.list
deb http://archive.ubuntu.com/ubuntu kinetic-proposed main multiverse restricted universe
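Creating that file can be scripted per machine. A minimal sketch follows; the DEST variable is my own convention so the snippet can be dry-run safely — on a real target machine it would be /etc/apt/sources.list.d and the command would run as root:

```shell
# Sketch: create the -proposed entry on one machine.
# DEST is a hypothetical knob: it defaults to a scratch directory here so the
# sketch is safe to dry-run; on a real machine use DEST=/etc/apt/sources.list.d.
DEST="${DEST:-$(mktemp -d)}"
echo "deb http://archive.ubuntu.com/ubuntu kinetic-proposed main multiverse restricted universe" \
    > "$DEST/ubuntu-kinetic-proposed.list"
cat "$DEST/ubuntu-kinetic-proposed.list"
```

Remember to run `apt update` afterwards so the -proposed index is actually fetched.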
Verify that the version to test is the one that is going to be installed:
$ apt-cache policy ceph
ceph:
  Installed: (none)
  Candidate: 17.2.6-0ubuntu0.22.10.1
  Version table:
     17.2.6-0ubuntu0.22.10.1 500
        500 http://archive.ubuntu.com/ubuntu kinetic-proposed/main amd64 Packages
     17.2.5-0ubuntu0.22.10.3 500
        500 http://nova.clouds.archive.ubuntu.com/ubuntu kinetic-updates/main amd64 Packages
        500 http://security.ubuntu.com/ubuntu kinetic-security/main amd64 Packages
     17.2.0-0ubuntu4 500
        500 http://nova.clouds.archive.ubuntu.com/ubuntu kinetic/main amd64 Packages
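If the check should be scriptable rather than eyeballed, the Candidate line can be extracted and compared. A sketch, using a canned excerpt of the output above as input — on a real machine, substitute `policy="$(apt-cache policy ceph)"`:

```shell
# Sketch: verify apt would install the version under test.
# Canned excerpt of the `apt-cache policy ceph` output from the plan;
# on a real machine use:  policy="$(apt-cache policy ceph)"
policy='ceph:
  Installed: (none)
  Candidate: 17.2.6-0ubuntu0.22.10.1'

candidate=$(printf '%s\n' "$policy" | awk '/Candidate:/ {print $2}')
echo "candidate=$candidate"
if [ "$candidate" = "17.2.6-0ubuntu0.22.10.1" ]; then
    echo "candidate matches the -proposed version"
fi
```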
Once the ceph cluster has been deployed successfully, we can ssh into one of the mons and test the rook module.
First, we verify that the rook module is not yet running:
$ sudo ceph mgr module ls
MODULE
balancer           on (always on)
crash              on (always on)
devicehealth       on (always on)
orchestrator       on (always on)
pg_autoscaler      on (always on)
progress           on (always on)
rbd_support        on (always on)
status             on (always on)
telemetry          on (always on)
volumes            on (always on)
iostat             on
nfs                on
restful            on
alerts             -
influx             -
insights           -
localpool          -
mirroring          -
osd_perf_query     -
osd_support        -
prometheus         -
selftest           -
snap_schedule      -
stats              -
telegraf           -
test_orchestrator  -
zabbix             -
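The "not yet enabled" precondition can also be asserted from a script instead of read off the table. A sketch against a canned excerpt of the listing — on the mon, substitute `mgr_ls="$(sudo ceph mgr module ls)"`:

```shell
# Sketch: assert the rook module does not yet appear as enabled.
# Canned excerpt of `ceph mgr module ls`; on the mon use:
#   mgr_ls="$(sudo ceph mgr module ls)"
mgr_ls='iostat             on
nfs                on
restful            on
alerts             -'

if ! printf '%s\n' "$mgr_ls" | grep -Eq '^rook[[:space:]]+on'; then
    echo "rook not enabled yet"
fi
```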
Then, we install and enable the module:
$ sudo apt install ceph-mgr-rook
$ sudo ceph mgr module enable rook
Verify that the cluster is healthy:
  cluster:
    id:     c3ab9238-1f66-11ee-9277-31985965425a
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum juju-233a7d-ceph-kinetic-0,juju-233a7d-ceph-kinetic-2,juju-233a7d-ceph-kinetic-1 (age 4m)
    mgr: juju-233a7d-ceph-kinetic-2(active, since 13s), standbys: juju-233a7d-ceph-kinetic-1, juju-233a7d-ceph-kinetic-0
    osd: 3 osds: 3 up, 3 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:
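For an unattended run, the health check reduces to comparing the one-line `ceph health` output against HEALTH_OK. A trivial sketch, with the value canned here — on the mon, substitute `health="$(sudo ceph health)"`:

```shell
# Sketch: scriptable health gate. On the mon this would be:
#   health="$(sudo ceph health)"
health="HEALTH_OK"
if [ "$health" = "HEALTH_OK" ]; then
    echo "cluster healthy"
else
    echo "cluster not healthy: $health" >&2
fi
```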
Lastly, check that the rook module is up and running:
$ sudo ceph mgr module ls
MODULE
balancer           on (always on)
crash              on (always on)
devicehealth       on (always on)
orchestrator       on (always on)
pg_autoscaler      on (always on)
progress           on (always on)
rbd_support        on (always on)
status             on (always on)
telemetry          on (always on)
volumes            on (always on)
iostat             on
nfs                on
restful            on
rook               on
alerts             -
influx             -
insights           -
localpool          -
mirroring          -
osd_perf_query     -
osd_support        -
prometheus         -
selftest           -
snap_schedule      -
stats              -
telegraf           -
test_orchestrator  -
zabbix             -
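The final check mirrors the earlier one, with the polarity flipped: rook must now show as enabled. A sketch against a canned excerpt — on the mon, substitute `mgr_ls="$(sudo ceph mgr module ls)"`:

```shell
# Sketch: assert rook now appears as an enabled module.
# Canned excerpt of `ceph mgr module ls`; on the mon use:
#   mgr_ls="$(sudo ceph mgr module ls)"
mgr_ls='restful            on
rook               on
alerts             -'

if printf '%s\n' "$mgr_ls" | grep -Eq '^rook[[:space:]]+on'; then
    echo "rook module enabled"
fi
```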