```
CONTAINER ID  IMAGE                                    COMMAND                  CREATED        STATUS        PORTS  NAMES
b08131564f70  quay.io/ceph/ceph-grafana:9.4.7          "/bin/sh -c 'grafana…"   6 minutes ago  Up 6 minutes         ceph-b930f228-87e2-11ee-a4e6-52540054ad0e-grafana-node01
53deac7fb865  quay.io/prometheus/alertmanager:v0.25.0  "/bin/alertmanager -…"   6 minutes ago  Up 6 minutes         ceph-b930f228-87e2-11ee-a4e6-52540054ad0e-alertmanager-node01
c3817b8f196f  quay.io/prometheus/prometheus:v2.43.0    "/bin/prometheus --c…"   6 minutes ago  Up 6 minutes         ceph-b930f228-87e2-11ee-a4e6-52540054ad0e-prometheus-node01
0e442dd010ab  quay.io/prometheus/node-exporter:v1.5.0  "/bin/node_exporter …"   7 minutes ago  Up 7 minutes         ceph-b930f228-87e2-11ee-a4e6-52540054ad0e-node-exporter-node01
2e18cdccea4d  quay.io/ceph/ceph                        "/usr/bin/ceph-crash…"   7 minutes ago  Up 7 minutes         ceph-b930f228-87e2-11ee-a4e6-52540054ad0e-crash-node01
761bb884af10  quay.io/ceph/ceph                        "/usr/bin/ceph-expor…"   7 minutes ago  Up 7 minutes         ceph-b930f228-87e2-11ee-a4e6-52540054ad0e-ceph-exporter-node01
ad8e70e8454f  quay.io/ceph/ceph:v18                    "/usr/bin/ceph-mgr -…"   8 minutes ago  Up 8 minutes         ceph-b930f228-87e2-11ee-a4e6-52540054ad0e-mgr-node01-ywnnwt
dc0ded459bf9  quay.io/ceph/ceph:v18                    "/usr/bin/ceph-mon -…"   8 minutes ago  Up 8 minutes         ceph-b930f228-87e2-11ee-a4e6-52540054ad0e-mon-node01
```

If you are using Podman, list the containers using;

```
podman ps
```
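If the host also runs other, non-Ceph containers, you can narrow the listing down to the Ceph daemons. As the listing above shows, cephadm prefixes its container names with `ceph-<fsid>`, so a simple name filter works; `docker ps` accepts the same flag:

```
podman ps --filter name=ceph
```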
Similarly, systemd unit files are also created for these containers;

```
systemctl list-units 'ceph*'
```

Sample output;

```
UNIT                                                                    LOAD   ACTIVE SUB     DESCRIPTION
ceph-b930f228-87e2-11ee-a4e6-52540054ad0e@alertmanager.node01.service  loaded active running Ceph alertmanager.node01 for b930f228-87e2-11ee-a4e6-52540054ad0e
ceph-b930f228-87e2-11ee-a4e6-52540054ad0e@ceph-exporter.node01.service loaded active running Ceph ceph-exporter.node01 for b930f228-87e2-11ee-a4e6-52540054ad0e
ceph-b930f228-87e2-11ee-a4e6-52540054ad0e@crash.node01.service         loaded active running Ceph crash.node01 for b930f228-87e2-11ee-a4e6-52540054ad0e
ceph-b930f228-87e2-11ee-a4e6-52540054ad0e@grafana.node01.service       loaded active running Ceph grafana.node01 for b930f228-87e2-11ee-a4e6-52540054ad0e
ceph-b930f228-87e2-11ee-a4e6-52540054ad0e@mgr.node01.ywnnwt.service    loaded active running Ceph mgr.node01.ywnnwt for b930f228-87e2-11ee-a4e6-52540054ad0e
ceph-b930f228-87e2-11ee-a4e6-52540054ad0e@mon.node01.service           loaded active running Ceph mon.node01 for b930f228-87e2-11ee-a4e6-52540054ad0e
ceph-b930f228-87e2-11ee-a4e6-52540054ad0e@node-exporter.node01.service loaded active running Ceph node-exporter.node01 for b930f228-87e2-11ee-a4e6-52540054ad0e
ceph-b930f228-87e2-11ee-a4e6-52540054ad0e@prometheus.node01.service    loaded active running Ceph prometheus.node01 for b930f228-87e2-11ee-a4e6-52540054ad0e
ceph-b930f228-87e2-11ee-a4e6-52540054ad0e.target                       loaded active active  Ceph cluster b930f228-87e2-11ee-a4e6-52540054ad0e
ceph.target                                                            loaded active active  All Ceph clusters and services

LOAD   = Reflects whether the unit definition was properly loaded.
ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
SUB    = The low-level unit activation state, values depend on unit type.

10 loaded units listed. Pass --all to see loaded but inactive units, too.
To show all installed unit files use 'systemctl list-unit-files'.
```
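Since each daemon has its own unit, you can inspect or restart an individual daemon with the usual systemctl commands. For example, using the monitor unit from the listing above;

```
systemctl status ceph-b930f228-87e2-11ee-a4e6-52540054ad0e@mon.node01.service
systemctl restart ceph-b930f228-87e2-11ee-a4e6-52540054ad0e@mon.node01.service
```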
### Enable Ceph CLI

When the bootstrap command completes, it prints a command for accessing the Ceph CLI. Execute that command to access the Ceph CLI;

```
/usr/sbin/cephadm shell \
	--fsid b930f228-87e2-11ee-a4e6-52540054ad0e \
	-c /etc/ceph/ceph.conf \
	-k /etc/ceph/ceph.client.admin.keyring
```

This drops you into the Ceph container CLI;
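The shell prompt changes to show that you are now inside the Ceph container, for example;

```
[ceph: root@node01 /]#
```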
You can then run Ceph commands, for example to check the Ceph cluster status;

```
ceph -s
```

Sample output;

```
[ceph: root@node01 /]# ceph -s
  cluster:
    id:     b930f228-87e2-11ee-a4e6-52540054ad0e
    health: HEALTH_WARN
            OSD count 0 < osd_pool_default_size 3

  services:
    mon: 1 daemons, quorum node01 (age 7m)
    mgr: node01.ywnnwt(active, since 6m)
    osd: 0 osds: 0 up, 0 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:
```

You can exit the Ceph CLI by pressing **Ctrl+D** or by typing **exit** and pressing ENTER.

There are other ways in which you can access the Ceph CLI. For example, you can run Ceph CLI commands directly via the cephadm command;

```
cephadm shell -- ceph -s
```

Alternatively, you can install the Ceph CLI tools on the host;

```
cephadm add-repo --release reef
```

```
cephadm install ceph-common
```

With this method, you can then run Ceph commands directly from the host;

```
ceph -s
```
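Note that the `cephadm shell -- <command>` form shown above is not limited to `ceph -s`; any Ceph command can be run this way as a one-off, without installing packages on the host. For example, using standard Ceph commands;

```
cephadm shell -- ceph health detail
cephadm shell -- ceph orch ps
```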