```
cephadm:
  Installed: (none)
  Candidate: 16.2.11+ds-2
  Version table:
     16.2.11+ds-2 500
        500 http://deb.debian.org/debian bookworm/main amd64 Packages
```

As shown above, the default Debian 12 repository only provides the older cephadm 16.2.x build. To install the current cephadm release, you first need the official Ceph release repository.
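The repository entry added below uses your Debian codename; if you want to confirm it beforehand, the `lsb_release` tool (from the lsb-release package) prints it. This is just an optional check:

```
# Print the Debian codename (Debian 12 = bookworm) used in the Ceph repository line below
lsb_release -sc
```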
To install the Ceph release repos on Debian 12, run the commands below:
```
wget -q -O- 'https://download.ceph.com/keys/release.asc' | \
gpg --dearmor -o /etc/apt/trusted.gpg.d/cephadm.gpg
```

```
echo deb https://download.ceph.com/debian-reef/ $(lsb_release -sc) main \
> /etc/apt/sources.list.d/ceph.list
```

```
apt update
```
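As an optional sanity check, you can confirm the key and the repository entry that were written in the previous steps:

```
# Show the imported Ceph release key
gpg --show-keys /etc/apt/trusted.gpg.d/cephadm.gpg

# Confirm the repository line apt will use
cat /etc/apt/sources.list.d/ceph.list
```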
Then, check the available version of the cephadm package again:

```
apt-cache policy cephadm
```

```
cephadm:
  Installed: (none)
  Candidate: 18.2.1-1~bpo12+1
  Version table:
     18.2.1-1~bpo12+1 500
        500 https://download.ceph.com/debian-reef bookworm/main amd64 Packages
     16.2.11+ds-2 500
        500 http://deb.debian.org/debian bookworm/main amd64 Packages
```

As you can see, the Ceph repository now provides the current release of the cephadm package. Install it as follows:
```
apt install cephadm
```

During the installation, you may see some errors related to the cephadm user account being created. Since we are using the root user account to bootstrap our Ceph cluster, you can safely ignore these errors.
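Once the package is installed, you can optionally confirm which build ended up on the system; one way (assuming the cephadm binary is on your PATH, as it is after the package install) is:

```
# Print the version of the cephadm that was just installed (should correspond to the 18.2.x Reef release)
cephadm version
```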
### Initialize Ceph Cluster Monitor On Ceph Admin Node
Your nodes are now ready to deploy a Ceph storage cluster.
It is now time to bootstrap the Ceph cluster in order to create the first Ceph monitor daemon on the Ceph admin node. Run the command below, substituting the IP address with that of the **Ceph admin node**:

```
cephadm bootstrap --mon-ip 192.168.122.170
```

```
Creating directory /etc/ceph for ceph.conf
Verifying podman|docker is present...
Verifying lvm2 is present...
Verifying time synchronization is in place...
Unit chrony.service is enabled and running
Repeating the final host check...
docker (/usr/bin/docker) is present
systemctl is present
lvcreate is present
Unit chrony.service is enabled and running
Host looks OK
Cluster fsid: 456f0baa-affa-11ee-be1c-525400575614
Verifying IP 192.168.122.170 port 3300 ...
Verifying IP 192.168.122.170 port 6789 ...
Mon IP `192.168.122.170` is in CIDR network `192.168.122.0/24`
Mon IP `192.168.122.170` is in CIDR network `192.168.122.0/24`
Internal network (--cluster-network) has not been provided, OSD replication will default to the public_network
Pulling container image quay.io/ceph/ceph:v18...
Ceph version: ceph version 18.2.1 (7fe91d5d5842e04be3b4f514d6dd990c54b29c76) reef (stable)
Extracting ceph user uid/gid from container image...
Creating initial keys...
Creating initial monmap...
Creating mon...
Waiting for mon to start...
Waiting for mon...
mon is available
Assimilating anything we can from ceph.conf...
Generating new minimal ceph.conf...
Restarting the monitor...
Setting public_network to 192.168.122.0/24 in mon config section
Wrote config to /etc/ceph/ceph.conf
Wrote keyring to /etc/ceph/ceph.client.admin.keyring
Creating mgr...
Verifying port 0.0.0.0:9283 ...
Verifying port 0.0.0.0:8765 ...
Verifying port 0.0.0.0:8443 ...
Waiting for mgr to start...
Waiting for mgr...
mgr not available, waiting (1/15)...
mgr not available, waiting (2/15)...
mgr not available, waiting (3/15)...
mgr is available
Enabling cephadm module...
Waiting for the mgr to restart...
Waiting for mgr epoch 5...
mgr epoch 5 is available
Setting orchestrator backend to cephadm...
Generating ssh key...
Wrote public SSH key to /etc/ceph/ceph.pub
Adding key to root@localhost authorized_keys...
Adding host ceph-mgr-mon01...
Deploying mon service with default placement...
Deploying mgr service with default placement...
Deploying crash service with default placement...
Deploying ceph-exporter service with default placement...
Deploying prometheus service with default placement...
Deploying grafana service with default placement...
Deploying node-exporter service with default placement...
Deploying alertmanager service with default placement...
Enabling the dashboard module...
Waiting for the mgr to restart...
Waiting for mgr epoch 9...
mgr epoch 9 is available
Generating a dashboard self-signed certificate...
Creating initial admin user...
Fetching dashboard port number...
Ceph Dashboard is now available at:

	     URL: https://ceph-mgr-mon01:8443/
	    User: admin
	Password: 0lquv02zaw

Enabling client.admin keyring and conf on hosts with "admin" label
Saving cluster configuration to /var/lib/ceph/456f0baa-affa-11ee-be1c-525400575614/config directory
Enabling autotune for osd_memory_target
You can access the Ceph CLI as following in case of multi-cluster or non-default config:

	sudo /usr/sbin/cephadm shell --fsid 456f0baa-affa-11ee-be1c-525400575614 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring

Or, if you are only running a single cluster on this host:
	sudo /usr/sbin/cephadm shell 

Please consider enabling telemetry to help improve Ceph:

	ceph telemetry on

For more information see:

	https://docs.ceph.com/en/latest/mgr/telemetry/

Bootstrap complete.
```
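Note from the output above that no internal cluster network was provided, so OSD replication traffic will default to the public network. If your nodes have a dedicated replication network, you could pass it at bootstrap time; the sketch below assumes a hypothetical second subnet, 10.10.10.0/24, purely for illustration:

```
# Example only: bootstrap with a dedicated cluster (replication) network.
# 10.10.10.0/24 is a placeholder CIDR for a second NIC on the Ceph nodes.
cephadm bootstrap \
    --mon-ip 192.168.122.170 \
    --cluster-network 10.10.10.0/24
```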
According to the documentation, the bootstrap command will:

- Create a monitor and manager daemon for the new cluster on the local host.
- Generate a new SSH key for the Ceph cluster and add it to the root user's `/root/.ssh/authorized_keys` file.
- Write a copy of the public key to `/etc/ceph/ceph.pub`.
- Write a minimal configuration file to `/etc/ceph/ceph.conf`. This file is needed to communicate with the new cluster.
- Write a copy of the `client.admin` administrative (privileged!) secret key to `/etc/ceph/ceph.client.admin.keyring`.
- Add the `_admin` label to the bootstrap host. By default, any host with this label will (also) get a copy of `/etc/ceph/ceph.conf` and `/etc/ceph/ceph.client.admin.keyring`.
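If you want to verify that last point, you can list the hosts the orchestrator knows about together with their labels, using the cephadm shell wrapper described in the next section:

```
# List cluster hosts and their labels; the bootstrap node should carry the _admin label
cephadm shell -- ceph orch host ls
```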
### Enable Ceph CLI

When the bootstrap command completes, it prints a command for accessing the Ceph CLI. In the case of a multi-cluster or non-default config, execute that command to access the Ceph CLI:
```
sudo /usr/sbin/cephadm shell \
	--fsid 456f0baa-affa-11ee-be1c-525400575614 \
	-c /etc/ceph/ceph.conf \
	-k /etc/ceph/ceph.client.admin.keyring
```

Otherwise, for the default config, just execute:
```
sudo cephadm shell
```

This drops you into the Ceph CLI; you should see your shell prompt change!
```
root@ceph-mgr-mon01:/#
```

You can now run ceph commands, e.g. to check the Ceph status:
```
ceph -s
```

```
  cluster:
    id:     456f0baa-affa-11ee-be1c-525400575614
    health: HEALTH_WARN
            OSD count 0 < osd_pool_default_size 3
 
  services:
    mon: 1 daemons, quorum ceph-mgr-mon01 (age 8m)
    mgr: ceph-mgr-mon01.gioqld(active, since 7m)
    osd: 0 osds: 0 up, 0 in
 
  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:
```
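The HEALTH_WARN shown here is expected at this stage: no OSDs have been added yet, so the cluster cannot meet the default pool replication size of 3. If you want a fuller explanation of any warning, you can ask for the details:

```
# Show an explanation of each current health warning
ceph health detail
```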
You can exit the Ceph CLI by pressing **Ctrl+D**, or by typing **exit** and pressing ENTER.

There are other ways in which you can access the Ceph CLI. For example, you can run Ceph CLI commands using the cephadm command:

```
cephadm shell -- ceph -s
```

Or you could install the Ceph CLI tools on the host (again, ignore errors about the cephadm user account):

```
apt install ceph-common
```

With this method, you can then run the Ceph commands directly: