sudo \/usr\/bin\/cephadm shell --fsid f959b65e-91c2-11ec-9776-abbffb8a52a1 -c \/etc\/ceph\/ceph.conf -k \/etc\/ceph\/ceph.client.admin.keyring<\/code><\/pre>\n\n\n\nThis drops you into the Ceph Docker container CLI;<\/p>\n\n\n\n
You can then run Ceph commands from here, e.g. to check the Ceph cluster status;<\/p>\n\n\n\n
sudo ceph -s<\/code><\/pre>\n\n\n\n\n cluster:\n id: f959b65e-91c2-11ec-9776-abbffb8a52a1\n health: HEALTH_WARN\n OSD count 0 < osd_pool_default_size 3\n \n services:\n mon: 1 daemons, quorum ceph-admin (age 23m)\n mgr: ceph-admin.yxxusl(active, since 20m)\n osd: 0 osds: 0 up, 0 in\n \n data:\n pools: 0 pools, 0 pgs\n objects: 0 objects, 0 B\n usage: 0 B used, 0 B \/ 0 B avail\n pgs: \n<\/code><\/pre>\n\n\n\nYou can exit the Docker CLI by pressing Ctrl+D<\/strong> or by typing exit<\/strong>.<\/p>\n\n\n\nThere are other ways in which you can access the Ceph CLI. For example, you can run Ceph CLI commands using the cephadm command.<\/p>\n\n\n\n
sudo cephadm shell -- ceph -s<\/code><\/pre>\n\n\n\nOr Install Ceph CLI tools on the host;<\/p>\n\n\n\n
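The `cephadm shell -- <command>` form works for any one-off Ceph command. As a sketch, a small wrapper function (hypothetical, purely for convenience; not part of cephadm) saves some typing:

```shell
# Hypothetical convenience wrapper: run any one-off Ceph command inside the
# cephadm shell container without starting an interactive session.
cephadm_ceph() {
  sudo cephadm shell -- ceph "$@"
}

# Usage (equivalent to the command shown above):
# cephadm_ceph -s
# cephadm_ceph health detail
```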
sudo cephadm add-repo --release pacific\nsudo cephadm install ceph-common<\/code><\/pre>\n\n\n\nWith this method, you can then run Ceph commands directly on the host;<\/p>\n\n\n\n
sudo ceph -s<\/code><\/pre>\n\n\n\nAdd Ceph Monitor Node to Ceph Cluster<\/h3>\n\n\n\n
At this point, we have provisioned only the Ceph Admin node;<\/p>\n\n\n\n
sudo ceph orch host ls<\/code><\/pre>\n\n\n\nSample output;<\/p>\n\n\n\n
HOST ADDR LABELS STATUS \nceph-admin 192.168.59.31 _admin <\/code><\/pre>\n\n\n\nSo next, add the Ceph Monitor node to the cluster;<\/p>\n\n\n\n
Copy the SSH public key generated by the bootstrap command to the Ceph Monitor node's root user account. Ensure root login is permitted on the Ceph Monitor node.<\/p>\n\n\n\n
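Root login is governed by the `PermitRootLogin` directive in `sshd_config`. A minimal sketch for checking a config file (the helper name `permits_root_login` is made up for illustration, not an OpenSSH tool):

```shell
# Sketch: check whether an sshd_config file permits root login.
# "permits_root_login" is a hypothetical helper, not part of OpenSSH.
permits_root_login() {
  # "yes" permits password logins (needed the first time you run ssh-copy-id);
  # "prohibit-password" permits key-based root logins only.
  grep -Eiq '^[[:space:]]*PermitRootLogin[[:space:]]+(yes|prohibit-password)' "$1"
}

# Usage: permits_root_login /etc/ssh/sshd_config && echo "root login allowed"
```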
sudo ssh-copy-id -f -i \/etc\/ceph\/ceph.pub root@ceph-mon<\/code><\/pre>\n\n\n\nOnce you have copied the Ceph SSH public key, execute the command below to add the Ceph Monitor to the cluster;<\/p>\n\n\n\n
sudo ceph orch host add ceph-mon<\/code><\/pre>\n\n\n\nSample command output;<\/p>\n\n\n\n
Added host 'ceph-mon' with addr '192.168.59.30'<\/code><\/pre>\n\n\n\nNext, label the host with its role (remember our ceph-monitor also doubles up as an OSD);<\/p>\n\n\n\n
sudo ceph orch host label add ceph-mon mon\nsudo ceph orch host label add ceph-mon osd<\/code><\/pre>\n\n\n\nAdd Ceph OSD Nodes to Ceph Cluster<\/h3>\n\n\n\n
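Adding a host and labelling it is a recurring pair of `ceph orch` calls. As a sketch, a tiny helper (hypothetical; it just bundles the two commands used above) keeps the pair together:

```shell
# Hypothetical helper: add a host to the cluster, then attach one label to it.
# Not a cephadm command; it simply bundles the two "ceph orch" calls above.
# Assumes the cluster SSH key was already copied to the host's root account.
add_labeled_host() {
  sudo ceph orch host add "$1"
  sudo ceph orch host label add "$1" "$2"
}

# Usage: add_labeled_host ceph-osd1 osd
```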
Similarly, copy the SSH keys to the OSD Nodes;<\/p>\n\n\n\n
for i in ceph-osd1 ceph-osd2; do sudo ssh-copy-id -f -i \/etc\/ceph\/ceph.pub root@$i; done<\/code><\/pre>\n\n\n\nAdd them to the cluster.<\/p>\n\n\n\n
sudo ceph orch host add ceph-osd1<\/code><\/pre>\n\n\n\nsudo ceph orch host add ceph-osd2<\/code><\/pre>\n\n\n\nDefine their respective labels;<\/p>\n\n\n\n
for i in ceph-osd1 ceph-osd2; do sudo ceph orch host label add $i osd; done<\/code><\/pre>\n\n\n\nList Ceph Cluster Nodes;<\/h3>\n\n\n\n
You can list the Ceph cluster nodes;<\/p>\n\n\n\n