`/etc/ceph/ceph.client.admin.keyring`.

As you can see from the bootstrap command output, cephadm is using Podman as the container management tool: **podman (/usr/bin/podman) version 4.6.1 is present**.

You can list the containers it created;
```
podman ps
```

Sample output;

```
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
b0001264bf0e quay.io/ceph/ceph:v18 -n mon.node01 -f ... 3 minutes ago Up 3 minutes ceph-f14cb896-889a-11ee-abf4-525400ac8730-mon-node01
673b69975d89 quay.io/ceph/ceph:v18 -n mgr.node01.gvh... 3 minutes ago Up 3 minutes ceph-f14cb896-889a-11ee-abf4-525400ac8730-mgr-node01-gvhmuc
4d4739e59495 quay.io/ceph/ceph@sha256:4ce43f6b683448acc5d45ef184be9a44a569cf9c0019d6bb21472523aa52873c -n client.ceph-ex... 3 minutes ago Up 3 minutes ceph-f14cb896-889a-11ee-abf4-525400ac8730-ceph-exporter-node01
9123d699b6a9 quay.io/ceph/ceph@sha256:4ce43f6b683448acc5d45ef184be9a44a569cf9c0019d6bb21472523aa52873c -n client.crash.n... 3 minutes ago Up 3 minutes ceph-f14cb896-889a-11ee-abf4-525400ac8730-crash-node01
0be1d3e87064 quay.io/prometheus/node-exporter:v1.5.0 --no-collector.ti... 3 minutes ago Up 3 minutes ceph-f14cb896-889a-11ee-abf4-525400ac8730-node-exporter-node01
3a6e81492858 quay.io/prometheus/prometheus:v2.43.0 --config.file=/et... 2 minutes ago Up 2 minutes ceph-f14cb896-889a-11ee-abf4-525400ac8730-prometheus-node01
c7f4e180775c quay.io/prometheus/alertmanager:v0.25.0 --cluster.listen-... 2 minutes ago Up 2 minutes ceph-f14cb896-889a-11ee-abf4-525400ac8730-alertmanager-node01
177cc82138c4 quay.io/ceph/ceph-grafana:9.4.7 /bin/bash 2 minutes ago Up 2 minutes ceph-f14cb896-889a-11ee-abf4-525400ac8730-grafana-node01
```
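If you want to inspect what one of these daemons is logging, cephadm can pull up its journald logs for you. A minimal sketch, using the fsid and the mon daemon name from the output above; arguments after `--` are passed through to journalctl;

```
# Show the last 25 journal lines for the bootstrap monitor daemon;
# the fsid and daemon name are taken from the "podman ps" output above
cephadm logs \
	--fsid f14cb896-889a-11ee-abf4-525400ac8730 \
	--name mon.node01 \
	-- -n 25
```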
If you are using Docker instead, list the containers using;

```
docker ps
```

Similarly, systemd unit files are also created for these containers;
```
systemctl list-units 'ceph*'
```

Sample output;

```
  UNIT                                                                    LOAD   ACTIVE SUB     DESCRIPTION
  ceph-f14cb896-889a-11ee-abf4-525400ac8730@alertmanager.node01.service  loaded active running Ceph alertmanager.node01 for f14cb896-889a-11ee-abf4-525400ac8730
  ceph-f14cb896-889a-11ee-abf4-525400ac8730@ceph-exporter.node01.service loaded active running Ceph ceph-exporter.node01 for f14cb896-889a-11ee-abf4-525400ac8730
  ceph-f14cb896-889a-11ee-abf4-525400ac8730@crash.node01.service         loaded active running Ceph crash.node01 for f14cb896-889a-11ee-abf4-525400ac8730
  ceph-f14cb896-889a-11ee-abf4-525400ac8730@grafana.node01.service       loaded active running Ceph grafana.node01 for f14cb896-889a-11ee-abf4-525400ac8730
  ceph-f14cb896-889a-11ee-abf4-525400ac8730@mgr.node01.gvhmuc.service    loaded active running Ceph mgr.node01.gvhmuc for f14cb896-889a-11ee-abf4-525400ac8730
  ceph-f14cb896-889a-11ee-abf4-525400ac8730@mon.node01.service           loaded active running Ceph mon.node01 for f14cb896-889a-11ee-abf4-525400ac8730
  ceph-f14cb896-889a-11ee-abf4-525400ac8730@node-exporter.node01.service loaded active running Ceph node-exporter.node01 for f14cb896-889a-11ee-abf4-525400ac8730
  ceph-f14cb896-889a-11ee-abf4-525400ac8730@prometheus.node01.service    loaded active running Ceph prometheus.node01 for f14cb896-889a-11ee-abf4-525400ac8730
  ceph-f14cb896-889a-11ee-abf4-525400ac8730.target                       loaded active active  Ceph cluster f14cb896-889a-11ee-abf4-525400ac8730
  ceph.target                                                            loaded active active  All Ceph clusters and services

LOAD   = Reflects whether the unit definition was properly loaded.
ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
SUB    = The low-level unit activation state, values depend on unit type.
10 loaded units listed. Pass --all to see loaded but inactive units, too.
To show all installed unit files use 'systemctl list-unit-files'.
```
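Since each daemon has its own systemd unit, you can manage the daemons individually with systemctl. For example, using the unit names from the listing above;

```
# Check and restart a single Ceph daemon via its systemd unit
# (unit naming scheme: ceph-<fsid>@<daemon>.service, as listed above)
systemctl status ceph-f14cb896-889a-11ee-abf4-525400ac8730@mon.node01.service
systemctl restart ceph-f14cb896-889a-11ee-abf4-525400ac8730@mon.node01.service
```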
### Enable Ceph CLI

When the bootstrap command completes, it prints a command for accessing the Ceph CLI. Execute that command to access the Ceph CLI;
```
/usr/sbin/cephadm shell \
	--fsid f14cb896-889a-11ee-abf4-525400ac8730 \
	-c /etc/ceph/ceph.conf \
	-k /etc/ceph/ceph.client.admin.keyring
```

This drops you into a shell inside the Ceph container;
From here you can run ceph commands, for example to check the Ceph cluster status;
```
ceph -s
```

Sample output;

```
  cluster:
    id:     f14cb896-889a-11ee-abf4-525400ac8730
    health: HEALTH_WARN
            OSD count 0 < osd_pool_default_size 3

  services:
    mon: 1 daemons, quorum node01 (age 6m)
    mgr: node01.gvhmuc(active, since 4m)
    osd: 0 osds: 0 up, 0 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:
```

The HEALTH_WARN state is expected at this stage: the warning simply means that no OSDs have been added yet, so the OSD count (0) is below osd_pool_default_size (3).

You can exit the Ceph CLI by pressing **Ctrl+D**, or by typing **exit** and pressing ENTER.

There are other ways to access the Ceph CLI. For example, you can run one-off Ceph commands through the cephadm command;
```
cephadm shell -- ceph -s
```

Alternatively, you can install the Ceph CLI tools directly on the host;

```
cephadm add-repo --release reef
```
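With the repository in place, the usual next step in the cephadm workflow is to install the client package and confirm that the locally installed CLI can reach the cluster. A quick sketch;

```
# Install the Ceph CLI package (ceph-common) on the host
cephadm install ceph-common

# Verify the installed version and that the CLI can talk to the cluster
ceph -v
ceph status
```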