- cephadm requires container support (podman or docker) and Python 3.

To install cephadm on Ubuntu 22.04, you can either install it using apt or simply download the standalone binary, as sketched below.
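If you opt for the binary route, the Ceph project publishes a standalone cephadm binary for each release. Here is a minimal sketch, assuming you are deploying Reef 18.2.0; the standalone build is meant to be distribution independent, so the el9 path is commonly used on Ubuntu as well:

```bash
# Download the standalone cephadm binary for the release being deployed
# (CEPH_RELEASE is our placeholder; adjust it to your target release)
CEPH_RELEASE=18.2.0
curl --silent --remote-name --location \
  "https://download.ceph.com/rpm-${CEPH_RELEASE}/el9/noarch/cephadm"
chmod +x cephadm
sudo mv cephadm /usr/local/bin/
```

Moving the binary into /usr/local/bin is our choice for convenience; any directory on your PATH works.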
The method to use will depend on the Ceph version you are deploying. In this guide, we are installing Ceph Reef, which, as of this post update, is at version 18.2.0.
If you check the cephadm package provided by the default Ubuntu repositories, you will see it is an older version:
```bash
apt-cache policy cephadm
```

```
cephadm:
  Installed: (none)
  Candidate: 17.2.6-0ubuntu0.22.04.1
  Version table:
     17.2.6-0ubuntu0.22.04.1 500
        500 http://de.archive.ubuntu.com/ubuntu jammy-updates/universe amd64 Packages
     17.2.5-0ubuntu0.22.04.3 500
        500 http://de.archive.ubuntu.com/ubuntu jammy-security/universe amd64 Packages
     17.1.0-0ubuntu3 500
        500 http://de.archive.ubuntu.com/ubuntu jammy/universe amd64 Packages
```

The surest way to install the latest version of cephadm is from the official Ceph package repository:
```bash
wget -q -O- 'https://download.ceph.com/keys/release.asc' | \
gpg --dearmor -o /etc/apt/trusted.gpg.d/cephadm.gpg
```

```bash
echo deb https://download.ceph.com/debian-reef/ $(lsb_release -sc) main \
> /etc/apt/sources.list.d/cephadm.list
```

```bash
apt update
```
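The commands above trust the Ceph release key globally via /etc/apt/trusted.gpg.d/. If you prefer to scope the key to just this repository, a signed-by variant along these lines should work (the keyring path is our choice, not mandated by Ceph):

```bash
# Keep the key outside the global trust store and reference it
# explicitly from the repository entry
sudo mkdir -p /etc/apt/keyrings
wget -q -O- 'https://download.ceph.com/keys/release.asc' | \
  gpg --dearmor | sudo tee /etc/apt/keyrings/ceph.gpg > /dev/null
echo "deb [signed-by=/etc/apt/keyrings/ceph.gpg] https://download.ceph.com/debian-reef/ $(lsb_release -sc) main" | \
  sudo tee /etc/apt/sources.list.d/cephadm.list
sudo apt update
```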
Confirm the version:

```bash
apt-cache policy cephadm
```

```
cephadm:
  Installed: (none)
  Candidate: 18.2.0-1jammy
  Version table:
     18.2.0-1jammy 500
        500 https://download.ceph.com/debian-reef jammy/main amd64 Packages
     17.2.6-0ubuntu0.22.04.1 500
        500 http://de.archive.ubuntu.com/ubuntu jammy-updates/universe amd64 Packages
     17.2.5-0ubuntu0.22.04.3 500
        500 http://de.archive.ubuntu.com/ubuntu jammy-security/universe amd64 Packages
     17.1.0-0ubuntu3 500
        500 http://de.archive.ubuntu.com/ubuntu jammy/universe amd64 Packages
```

Then install cephadm:

```bash
apt install cephadm
```
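Once installed, it is worth confirming that the binary reports the Reef release rather than the older distro build; the exact output will vary with the point release:

```bash
# Should report an 18.x (reef) version
cephadm version
```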
#### Initialize Ceph Cluster Monitor On Ceph Admin Node

Your nodes are now ready for the deployment of a Ceph storage cluster. To begin with, switch to the cephadmin user:

```bash
su - cephadmin
```

```bash
whoami
```

Output:
```
cephadmin
```

It is now time to bootstrap the Ceph cluster in order to create the first Ceph monitor daemon on the Ceph admin node. When running the bootstrap command, substitute the IP address with that of your Ceph admin node.
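Bootstrap also accepts optional flags. For example, if your hosts have a dedicated replication network, you could additionally pass --cluster-network; without it, OSD replication defaults to the public network, as the bootstrap output later notes. A purely illustrative sketch (the 10.10.10.0/24 subnet is our example, not part of this setup):

```bash
# Illustrative only: bootstrap with a dedicated OSD replication network
sudo cephadm bootstrap --mon-ip 192.168.122.240 --cluster-network 10.10.10.0/24
```

In this guide, we use the minimal form: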
```bash
sudo cephadm bootstrap --mon-ip 192.168.122.240
```

```
Creating directory /etc/ceph for ceph.conf
Verifying podman|docker is present...
Verifying lvm2 is present...
Verifying time synchronization is in place...
Unit chrony.service is enabled and running
Repeating the final host check...
podman (/usr/bin/podman) version 3.4.4 is present
systemctl is present
lvcreate is present
Unit chrony.service is enabled and running
Host looks OK
Cluster fsid: 70d227de-83e3-11ee-9dda-ff8b7941e415
Verifying IP 192.168.122.240 port 3300 ...
Verifying IP 192.168.122.240 port 6789 ...
Mon IP `192.168.122.240` is in CIDR network `192.168.122.0/24`
Mon IP `192.168.122.240` is in CIDR network `192.168.122.0/24`
Internal network (--cluster-network) has not been provided, OSD replication will default to the public_network
Pulling container image quay.io/ceph/ceph:v18...
Ceph version: ceph version 18.2.0 (5dd24139a1eada541a3bc16b6941c5dde975e26d) reef (stable)
Extracting ceph user uid/gid from container image...
Creating initial keys...
Creating initial monmap...
Creating mon...
Waiting for mon to start...
Waiting for mon...
mon is available
Assimilating anything we can from ceph.conf...
Generating new minimal ceph.conf...
Restarting the monitor...
Setting mon public_network to 192.168.122.0/24
Wrote config to /etc/ceph/ceph.conf
Wrote keyring to /etc/ceph/ceph.client.admin.keyring
Creating mgr...
Verifying port 9283 ...
Verifying port 8765 ...
Verifying port 8443 ...
Waiting for mgr to start...
Waiting for mgr...
mgr not available, waiting (1/15)...
mgr not available, waiting (2/15)...
mgr not available, waiting (3/15)...
mgr is available
Enabling cephadm module...
Waiting for the mgr to restart...
Waiting for mgr epoch 5...
mgr epoch 5 is available
Setting orchestrator backend to cephadm...
Generating ssh key...
Wrote public SSH key to /etc/ceph/ceph.pub
Adding key to root@localhost authorized_keys...
Adding host ceph-admin...
Deploying mon service with default placement...
Deploying mgr service with default placement...
Deploying crash service with default placement...
Deploying ceph-exporter service with default placement...
Deploying prometheus service with default placement...
Deploying grafana service with default placement...
Deploying node-exporter service with default placement...
Deploying alertmanager service with default placement...
Enabling the dashboard module...
Waiting for the mgr to restart...
Waiting for mgr epoch 9...
mgr epoch 9 is available
Generating a dashboard self-signed certificate...
Creating initial admin user...
Fetching dashboard port number...
Ceph Dashboard is now available at:

	 URL: https://ceph-admin:8443/
	 User: admin
	Password: hnrpt41gff

Enabling client.admin keyring and conf on hosts with "admin" label
Saving cluster configuration to /var/lib/ceph/70d227de-83e3-11ee-9dda-ff8b7941e415/config directory
Enabling autotune for osd_memory_target
You can access the Ceph CLI as following in case of multi-cluster or non-default config:

	sudo /usr/sbin/cephadm shell --fsid 70d227de-83e3-11ee-9dda-ff8b7941e415 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring

Or, if you are only running a single cluster on this host:

	sudo /usr/sbin/cephadm shell 

Please consider enabling telemetry to help improve Ceph:

	ceph telemetry on

For more information see:

	https://docs.ceph.com/en/latest/mgr/telemetry/

Bootstrap complete.
```

According to the documentation, the bootstrap command will:

- Create a monitor and manager daemon for the new cluster on the localhost.
- Generate a new SSH key for the Ceph cluster and add it to the root user's `/root/.ssh/authorized_keys` file.
- Write a copy of the public key to `/etc/ceph/ceph.pub`.
- Write a minimal configuration file to `/etc/ceph/ceph.conf`. This file is needed to communicate with the new cluster.