```
sudo mkdir /etc/kolla
```

Update the ownership of the Kolla configuration directory to the user with which you activated the Kolla-ansible deployment virtual environment:
```
sudo chown $USER:$USER /etc/kolla
```

Copy the main Kolla configuration file, **globals.yml**, and the OpenStack services passwords file, **passwords.yml**, into the Kolla configuration directory above from the virtual environment.

```
cp $HOME/kolla-ansible/share/kolla-ansible/etc_examples/kolla/* /etc/kolla/
```

Confirm:
```
ls /etc/kolla
```

```
globals.yml  passwords.yml
```

#### Create Ceph Configuration Directories on OpenStack
You need to create directories to store the OpenStack Ceph client configurations. The OpenStack Ceph clients in this context are the OpenStack nodes running `glance_api`, `cinder_volume`, `nova_compute` and `cinder_backup`.

Since we are using our controller node as our Kolla-ansible deployment node, we will create configuration directories for Glance, Nova and Cinder volume/backup to store the Ceph configurations as follows.
```
kifarunix@controller01:~$ mkdir -p /etc/kolla/config/{glance,cinder/cinder-volume,cinder/cinder-backup,nova}
```

#### Copy Ceph Configurations to OpenStack Client Directories
Copy the Ceph configuration file, **ceph.conf**, from the Ceph admin node to each of the OpenStack service directories created above.

192.168.200.200 is my controller01 node.
Glance:
```
ssh kifarunix@192.168.200.200 tee /etc/kolla/config/glance/ceph.conf < /etc/ceph/ceph.conf
```

Cinder Volume and Backup:
```
ssh kifarunix@192.168.200.200 tee /etc/kolla/config/cinder/cinder-volume/ceph.conf < /etc/ceph/ceph.conf
```

```
ssh kifarunix@192.168.200.200 tee /etc/kolla/config/cinder/cinder-backup/ceph.conf < /etc/ceph/ceph.conf
```

Nova:
```
ssh kifarunix@192.168.200.200 tee /etc/kolla/config/nova/ceph.conf < /etc/ceph/ceph.conf
```

Log in to your Kolla-ansible deployment node and confirm the above:
```
kifarunix@controller01:~$ cat /etc/kolla/config/glance/ceph.conf
```

```
# minimal ceph.conf for 1e266088-9480-11ee-a7e1-738d8527cddc
[global]
fsid = 1e266088-9480-11ee-a7e1-738d8527cddc
mon_host = [v2:192.168.200.108:3300/0,v1:192.168.200.108:6789/0] [v2:192.168.200.109:3300/0,v1:192.168.200.109:6789/0] [v2:192.168.200.110:3300/0,v1:192.168.200.110:6789/0]
```

Confirm for cinder-volume, cinder-backup and nova as well.
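For future reference, the same distribution can be scripted from the Ceph admin node in a single pass. This is just a sketch, equivalent to the individual `ssh ... tee` commands above, assuming the same SSH user and controller address:

```
# Sketch: push /etc/ceph/ceph.conf to all four Kolla service config
# directories on the controller in one loop. Adjust user/IP to your setup.
for dir in glance cinder/cinder-volume cinder/cinder-backup nova; do
  ssh kifarunix@192.168.200.200 "tee /etc/kolla/config/${dir}/ceph.conf" < /etc/ceph/ceph.conf > /dev/null
done
```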
#### Create Ceph Credentials for OpenStack Clients
As already mentioned, our OpenStack Ceph clients in this context are the OpenStack Glance/Cinder services.
Any client that accesses the Ceph cluster needs to authenticate itself to Ceph in order to read, write or manage specific object data in the cluster. To achieve this, Ceph uses the CephX protocol. CephX authorizes users/clients and daemons to perform specific actions within a Ceph cluster. It enforces access control rules that determine which users and daemons are allowed to read, write, or manage data, as well as perform other administrative tasks.
CephX authorization is primarily based on the concept of capabilities (commonly abbreviated as **caps**). Caps describe the permissions granted to an authenticated user to exercise the functionality of the monitors, OSDs and metadata servers. They can also restrict access to data within a pool, a namespace within a pool, or a set of pools based on their application tags.

CephX authentication is enabled by default. It uses shared secret keys for authentication, meaning both the client and the Ceph monitors have a copy of the client's secret key.
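To make the idea of caps concrete, an existing user's capabilities can be changed at any time with `ceph auth caps`. The sketch below is purely illustrative; the user `client.foo` and pool `bar` are placeholders, not part of this deployment:

```
# Illustration only: grant client.foo read-only access to the monitors
# and read/write access to objects in the "bar" pool.
sudo ceph auth caps client.foo mon 'allow r' osd 'allow rw pool=bar'
```

The Glance and Cinder users created later in this section follow the same pattern, just with service-specific pools and permissions.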
There are two common naming conventions for Ceph keyring files:
- **User keyrings:** For user keyrings, the file name typically follows the format `ceph.client.username.keyring`, where `username` is the name of the Ceph user. For example, the keyring file for the `client.admin` user would be named `ceph.client.admin.keyring`.
- **Daemon keyrings:** For daemon keyrings, the file name typically follows the format `ceph.service.keyring`, where `service` is the name of the Ceph daemon. For example, the keyring file for the `ceph-osd` daemon would be named `ceph.osd.keyring`.

When you deploy Ceph using tools such as cephadm, you will see that a keyring file for the admin user, **ceph.client.admin.keyring**, is created under the **/etc/ceph/** directory. Authentication keys and capabilities for Ceph users and daemons are stored in keyring files.

For example, if you run the `ceph health` command without specifying a user name or keyring, Ceph interprets the command like this:

```
ceph -n client.admin --keyring=/etc/ceph/ceph.client.admin.keyring health
```

You can list the Ceph authentication state (all users in the cluster) using the command below:
```
sudo ceph auth ls
```

```
osd.0
	key: AQDH7HBlXtGxChAA7ToLWEBb+E5rMXy2AHFR7Q==
	caps: [mgr] allow profile osd
	caps: [mon] allow profile osd
	caps: [osd] allow *
osd.1
	key: AQDL7HBl4rD5FBAAYn5E8aT3dX4Evj84IpgSYA==
	caps: [mgr] allow profile osd
	caps: [mon] allow profile osd
	caps: [osd] allow *
osd.2
	key: AQDL7HBlEbhUHRAA10a3R6MI+gZkhwOO/b/bWA==
	caps: [mgr] allow profile osd
	caps: [mon] allow profile osd
	caps: [osd] allow *
osd.3
	key: AQDM7HBlT7NaBRAApBwCaAA7KP7kIL9Sa3UlDQ==
	caps: [mgr] allow profile osd
	caps: [mon] allow profile osd
	caps: [osd] allow *
osd.4
	key: AQDU7HBlc6ZBCBAAfc+X7ud86ED+reyxJQ/4hw==
	caps: [mgr] allow profile osd
	caps: [mon] allow profile osd
	caps: [osd] allow *
osd.5
	key: AQDU7HBlMj2PORAA7XMnB0E9vMLzM/vhdYlUew==
	caps: [mgr] allow profile osd
	caps: [mon] allow profile osd
	caps: [osd] allow *
client.admin
	key: AQB06nBlLRqaARAAd6foD4m7LCacr35VkwbB8A==
	caps: [mds] allow *
	caps: [mgr] allow *
	caps: [mon] allow *
	caps: [osd] allow *
client.bootstrap-mds
	key: AQB26nBlveK/KxAAALdKzVeoRpTIPMWdZG+0ZA==
	caps: [mon] allow profile bootstrap-mds
client.bootstrap-mgr
	key: AQB26nBliOi/KxAAoOBnL/3ZDlemCJb1EW/txA==
	caps: [mon] allow profile bootstrap-mgr
client.bootstrap-osd
	key: AQB26nBlWu2/KxAAbxUUdBFxuldf0GjHR1lBIw==
	caps: [mon] allow profile bootstrap-osd
client.bootstrap-rbd
	key: AQB26nBlJfK/KxAA76mYZYJmpj0tN6s0K2eOLw==
	caps: [mon] allow profile bootstrap-rbd
client.bootstrap-rbd-mirror
	key: AQB26nBl9fa/KxAA3mG63hMRfdeAH0rj+Y4JRg==
	caps: [mon] allow profile bootstrap-rbd-mirror
client.bootstrap-rgw
	key: AQB26nBl5/y/KxAAOtN2KnJgCh9HkiSSZB6c9g==
	caps: [mon] allow profile bootstrap-rgw
client.ceph-exporter.ceph-mon-osd01
	key: AQCo6nBlJXysAxAAFzvIyRTjJZFePGxt6SECyg==
	caps: [mgr] allow r
	caps: [mon] allow r
	caps: [osd] allow r
client.ceph-exporter.osd02
	key: AQBC7HBlKtjvBRAA1gN4CDBJT2YCMRW7k9F3HQ==
	caps: [mgr] allow r
	caps: [mon] allow r
	caps: [osd] allow r
client.ceph-exporter.osd03
	key: AQBn7HBlqpWgNxAACz49/HzFw9iPAu+pm78ncg==
	caps: [mgr] allow r
	caps: [mon] allow r
	caps: [osd] allow r
client.crash.ceph-mon-osd01
	key: AQCp6nBlqjT6MhAAlimlctchb6CwhgTFX7wBvA==
	caps: [mgr] profile crash
	caps: [mon] profile crash
client.crash.osd02
	key: AQBE7HBlXe9vDBAADWjCfeElTOMpG5/9Qs/jMw==
	caps: [mgr] profile crash
	caps: [mon] profile crash
client.crash.osd03
	key: AQBp7HBl5UGWJhAAEVAsSTGn4LzpnDTxes06LA==
	caps: [mgr] profile crash
	caps: [mon] profile crash
mgr.ceph-mon-osd01.nubjcu
	key: AQB06nBlQXVMFRAALNjTMBTZi5cL4bDYLpHxCg==
	caps: [mds] allow *
	caps: [mon] profile mgr
	caps: [osd] allow *
mgr.osd02.zfvugc
	key: AQBG7HBlGKgVHRAAo1MRpjTN8k9vQeuk4oMcyA==
	caps: [mds] allow *
	caps: [mon] profile mgr
	caps: [osd] allow *
```

You can get a specific user's key and capabilities as well. For example, to check the admin user:
```
sudo ceph auth get client.admin
```

```
[client.admin]
	key = AQB06nBlLRqaARAAd6foD4m7LCacr35VkwbB8A==
	caps mds = "allow *"
	caps mgr = "allow *"
	caps mon = "allow *"
	caps osd = "allow *"
```

Without further ado, create the OpenStack Glance/Cinder Ceph credentials:
```
ceph auth get-or-create client.glance mon 'profile rbd' osd 'profile rbd pool=<pool>' mgr 'profile rbd pool=<pool>'
```

Replace `<pool>` with your pool name. In this guide we use explicit capabilities rather than the `rbd` profiles, for example:

Glance Ceph credentials:
```
sudo ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=glance-images'
```

Cinder volume and backup credentials:
```
sudo ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=cinder-volume, allow rx pool=glance-images'
```

```
sudo ceph auth get-or-create client.cinder-backup mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=cinder-backup'
```

The keys will be printed to standard output as well.
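If you also want the keyrings saved as files on the Ceph admin node itself, `ceph auth get` can write them out with `-o`. This is optional; in this guide the Kolla config directories are populated over SSH in a later step:

```
# Optional: dump the generated keyrings to files on the Ceph admin node.
sudo ceph auth get client.glance -o /etc/ceph/ceph.client.glance.keyring
sudo ceph auth get client.cinder -o /etc/ceph/ceph.client.cinder.keyring
sudo ceph auth get client.cinder-backup -o /etc/ceph/ceph.client.cinder-backup.keyring
```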
So, what do the commands do exactly?
- `ceph`: the command-line interface for interacting with the Ceph storage cluster.
- `auth`: the `auth` subsystem in Ceph is responsible for managing authentication and authorization.
- `get-or-create`: tells Ceph to either retrieve the existing authentication information for the specified client (`client.glance`) or create it if it doesn't exist.
- `client.glance`: the name of the Ceph client for which the authentication information is being created or retrieved. In this case, it is named "glance".
- `mon 'allow r'`: specifies the permissions for the monitors (MONs) in the Ceph cluster. It grants read-only (`allow r`) permission on the monitors.
- `osd 'allow class-read object_prefix rbd_children, allow rwx pool=glance-images'`: specifies the permissions for the Object Storage Daemons (OSDs) in the Ceph cluster. It grants the following permissions:
  - `allow class-read object_prefix rbd_children`: allows the client class-read access to objects whose names start with the `rbd_children` prefix, which RBD uses to track clone parent/child relationships.
  - `allow rwx pool=glance-images`: grants read, write, and execute permissions on the `glance-images` pool.

You can check the details using:
```
sudo ceph auth ls
```

```
...
client.cinder
	key: AQCgp3JlbaE2GBAAiG1Tpcjm/DyTXmYClvxTYQ==
	caps: [mon] allow r
	caps: [osd] allow class-read object_prefix rbd_children, allow rwx pool=cinder-volume, allow rx pool=glance-images
client.cinder-backup
	key: AQCgp3JliNIvLhAA7rgiP5zxA0wFIYCqFjoHvg==
	caps: [mon] allow r
	caps: [osd] allow class-read object_prefix rbd_children, allow rwx pool=cinder-backup
...
client.glance
	key: AQChp3JlGbZvBxAA6nI65tVe3Yi+jJCUP86FwQ==
	caps: [mon] allow r
	caps: [osd] allow class-read object_prefix rbd_children, allow rwx pool=glance-images
...
```

Only the newly created clients are shown above; the other entries are unchanged from the earlier listing.

If you want to print the key only:
```
sudo ceph auth print-key TYPE.ID
```

Where `TYPE` is either `client`, `osd`, `mon`, or `mds`, and `ID` is the user name or the ID of the daemon.

You can also use the command `ceph auth get <username>`:

```
sudo ceph auth get client.glance
```

If for some reason you want to delete a user and its associated caps, use the command `ceph auth del TYPE.ID`.

For example, to delete `client.cinder` (just an illustration; do not actually delete it here, since it is used below):
```
sudo ceph auth del client.cinder
```

#### Copy Ceph Credentials to OpenStack Clients
Once you have generated the credentials for the OpenStack services, copy them **from** the Ceph admin node to the client.

**Be sure to remove the leading tabs from the copied files.** A quick way to do that in bulk is sketched next.

In our case, we have already created the Ceph configuration directories on our Kolla-ansible control node, which is controller01. So copying the OpenStack client/service keys to the node itself is as easy as running the commands below.
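For reference, once the copies below are done, something like the following `sed` pass can strip the leading tabs in one go. This is a rough sketch, assuming the files end up under `/etc/kolla/config` on controller01 as in this guide:

```
# Sketch: strip leading tabs from the copied Ceph configs and keyrings
# under /etc/kolla/config. Run this on controller01 after the copy
# commands below have completed.
sudo find /etc/kolla/config -type f \( -name 'ceph.conf' -o -name '*.keyring' \) \
  -exec sed -i 's/^\t*//' {} +
```

With that noted, copy the keyrings for each service.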
Glance:
```
sudo ceph auth get-or-create client.glance | ssh kifarunix@192.168.200.200 tee /etc/kolla/config/glance/ceph.client.glance.keyring
```

Cinder Volume and Backup:
```
sudo ceph auth get-or-create client.cinder | ssh kifarunix@192.168.200.200 tee /etc/kolla/config/cinder/cinder-volume/ceph.client.cinder.keyring
```

```
sudo ceph auth get-or-create client.cinder-backup | ssh kifarunix@192.168.200.200 tee /etc/kolla/config/cinder/cinder-backup/ceph.client.cinder-backup.keyring
```

`cinder-backup` requires two keyrings, for accessing the volumes pool and the backup pool. Hence, copy the cinder-volume keyring into the cinder-backup configuration directory as well:

```
sudo ceph auth get-or-create client.cinder | ssh kifarunix@192.168.200.200 tee /etc/kolla/config/cinder/cinder-backup/ceph.client.cinder.keyring
```

Nova:
If you will be booting OpenStack instances from volumes that are stored, or need to be stored, on Ceph, then Nova must be configured to access the Cinder volume pool on Ceph.
Thus, copy both the Glance and Cinder volume keyrings to the Nova configuration directory.
```
sudo ceph auth get-or-create client.glance | ssh kifarunix@192.168.200.200 tee /etc/kolla/config/nova/ceph.client.glance.keyring
```

```
sudo ceph auth get-or-create client.cinder | ssh kifarunix@192.168.200.200 tee /etc/kolla/config/nova/ceph.client.cinder.keyring
```

Confirm on the client:
```
kifarunix@controller01:~$ cat /etc/kolla/config/glance/ceph.client.glance.keyring
```

```
[client.glance]
key = AQChp3JlGbZvBxAA6nI65tVe3Yi+jJCUP86FwQ==
```

```
cat /etc/kolla/config/cinder/cinder-volume/ceph.client.cinder.keyring
```

```
[client.cinder]
key = AQCgp3JlbaE2GBAAiG1Tpcjm/DyTXmYClvxTYQ==
```

```
cat /etc/kolla/config/cinder/cinder-backup/ceph.client.cinder-backup.keyring
```

```
[client.cinder-backup]
key = AQCgp3JliNIvLhAA7rgiP5zxA0wFIYCqFjoHvg==
```

```
cat /etc/kolla/config/nova/ceph.client.cinder.keyring /etc/kolla/config/nova/ceph.client.glance.keyring
```

```
[client.cinder]
key = AQCgp3JlbaE2GBAAiG1Tpcjm/DyTXmYClvxTYQ==
[client.glance]
key = AQChp3JlGbZvBxAA6nI65tVe3Yi+jJCUP86FwQ==
```

#### Enable OpenStack Services Cephx Authentication
Next, update the OpenStack services' Ceph configuration files to enable CephX authentication.
```
kifarunix@controller01:~$ cat /etc/kolla/config/glance/ceph.conf
```

```
# minimal ceph.conf for 1e266088-9480-11ee-a7e1-738d8527cddc
[global]
fsid = 1e266088-9480-11ee-a7e1-738d8527cddc
mon_host = [v2:192.168.200.108:3300/0,v1:192.168.200.108:6789/0] [v2:192.168.200.109:3300/0,v1:192.168.200.109:6789/0] [v2:192.168.200.110:3300/0,v1:192.168.200.110:6789/0]
```

So, you need to update the configuration file to define the path to the keyring. The keyring will be installed under **/etc/ceph/ceph.client.glance.keyring**.

In addition, enable CephX authentication by adding these lines:
```
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
```

> Note that the Ceph-generated configuration files have leading tabs. These tabs break Kolla Ansible's INI parser, so be sure to remove them from your `ceph.conf` files when copying them.

See our updated config for Glance (**leading tabs removed**):

Glance:
```
kifarunix@controller01:~$ cat /etc/kolla/config/glance/ceph.conf
```

```
# minimal ceph.conf for 1e266088-9480-11ee-a7e1-738d8527cddc
[global]
fsid = 1e266088-9480-11ee-a7e1-738d8527cddc
mon_host = [v2:192.168.200.108:3300/0,v1:192.168.200.108:6789/0] [v2:192.168.200.109:3300/0,v1:192.168.200.109:6789/0] [v2:192.168.200.110:3300/0,v1:192.168.200.110:6789/0]
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
[client.glance]
keyring = /etc/ceph/ceph.client.glance.keyring
```

Cinder Volume:
```
kifarunix@controller01:~$
```