- **Volume-Backed Live Migration:** In this scenario, instances use volumes for storage instead of ephemeral disks. This method is faster than block live migration because the disk images do not need to be copied. However, it is still slower than shared-storage-based live migration because the block storage volumes need to be attached to the destination host. Cinder-managed block storage backends such as Ceph and GlusterFS support volume-backed live migration.

These classifications help determine the method of live migration suitable for your specific instance and storage setup.
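For example, a volume-backed instance can be live migrated without the `--block-migration` flag, since its disk lives on the storage backend rather than on the hypervisor. A minimal sketch; the instance name `vol-backed-vm` is hypothetical:

```bash
# Volume-backed instance: no disk copy is needed, so a plain live
# migration suffices. 'vol-backed-vm' is a hypothetical instance name.
openstack server migrate --live-migration --wait vol-backed-vm
```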
You can perform the migration from the Horizon dashboard or from the command line.
Kindly note that OpenStack instance migration is a proactive, planned operation. In emergencies, such as a hardware failure on a compute node, you might want to use the **evacuate** process instead, as sketched below.
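A minimal sketch of evacuation, assuming the source host is already marked down and a recent python-openstackclient that provides the `server evacuate` subcommand (older deployments use `nova evacuate` instead):

```bash
# Rebuild the instance on another host after its compute node has failed.
# The scheduler picks the new host. 'gracious_turing' is the demo instance
# used later in this guide.
openstack server evacuate gracious_turing
```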
#### Get a List of Running Instances on the Compute Node to Remove

To begin with, get a list of all instances running on the compute node you need to remove. For example, below is a list of instances running on our compute02 node:

```bash
openstack server list --host compute02 --all-projects
```

Sample output:
```
+--------------------------------------+-----------------+--------+-------------------------+--------+---------+
| ID                                   | Name            | Status | Networks                | Image  | Flavor  |
+--------------------------------------+-----------------+--------+-------------------------+--------+---------+
| 9eaa3419-47cf-40bd-a981-92517c81e2c7 | gracious_turing | ACTIVE | DEMO_NET=192.168.50.128 | cirros | custom1 |
+--------------------------------------+-----------------+--------+-------------------------+--------+---------+
```

#### Get a List of Compute Nodes

Similarly, you can list the available compute nodes, in case you want to explicitly specify which node to migrate an instance to. Otherwise, if you have multiple compute nodes, the Nova scheduler decides where to place the instance being migrated.
```bash
openstack hypervisor list
```

```
+--------------------------------------+---------------------+-----------------+-----------------+-------+
| ID                                   | Hypervisor Hostname | Hypervisor Type | Host IP         | State |
+--------------------------------------+---------------------+-----------------+-----------------+-------+
| 6aa76044-d456-4c3b-8f28-fcfc7e79b658 | compute01           | QEMU            | 192.168.200.202 | up    |
| 7365f5eb-62e1-477e-bf45-8f77ea98802a | compute02           | QEMU            | 192.168.200.203 | up    |
+--------------------------------------+---------------------+-----------------+-----------------+-------+
```

#### Migrate OpenStack Instances to Other Compute Nodes

Once you have the information about the compute nodes, you can proceed to migrate your instances.
As already mentioned, depending on the criticality of the operations/services handled by an instance, you can choose either cold or live migration.
OpenStack instances can be migrated using the **`openstack server migrate`** command.

```bash
openstack server migrate --help
```

```
usage: openstack server migrate [-h] [--live-migration] [--host <host>] [--shared-migration | --block-migration] [--disk-overcommit | --no-disk-overcommit]
                                [--wait]
                                <server>

Migrate server to different host. A migrate operation is implemented as a resize operation using the same flavor as the old server. This means that, like resize, migrate
works by creating a new server using the same flavor and copying the contents of the original disk into a new one. As with resize, the migrate operation is a two-step
process for the user: the first step is to perform the migrate, and the second step is to either confirm (verify) success and release the old server, or to declare a
revert to release the new server and restart the old one.

positional arguments:
  <server>              Server (name or ID)

options:
  -h, --help            show this help message and exit
  --live-migration      Live migrate the server; use the ``--host`` option to specify a target host for the migration which will be validated by the scheduler
  --host <host>
                        Migrate the server to the specified host. (supported with --os-compute-api-version 2.30 or above when used with the --live-migration option)
                        (supported with --os-compute-api-version 2.56 or above when used without the --live-migration option)
  --shared-migration    Perform a shared live migration (default before --os-compute-api-version 2.25, auto after)
  --block-migration     Perform a block live migration (auto-configured from --os-compute-api-version 2.25)
  --disk-overcommit     Allow disk over-commit on the destination host (supported with --os-compute-api-version 2.24 or below)
  --no-disk-overcommit  Do not over-commit disk on the destination host (default) (supported with --os-compute-api-version 2.24 or below)
  --wait                Wait for migrate to complete
```

So, let's live migrate our instance, **`gracious_turing`**, with the UUID **`9eaa3419-47cf-40bd-a981-92517c81e2c7`**.

Note that the instance boots from an image and there is no shared storage; hence, we will do a block-based live migration:
```bash
openstack server migrate --live-migration --block-migration gracious_turing --wait
```

If you check Horizon, under Instances, you will see the instance status as Migrating.

If you want to do a cold migration instead, you can shut the instance down and migrate it, as in the sketch below.
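A minimal sketch of cold migration, using the same demo instance. As the help output above explains, migration is a two-step operation: you confirm (or revert) after the move completes. Targeting a specific host with `--host` requires `--os-compute-api-version 2.56` or above for cold migration:

```bash
# Stop the instance, then cold migrate it. The scheduler picks the
# destination unless you pass --host (needs a recent compute API version).
openstack server stop gracious_turing
openstack server migrate gracious_turing --wait

# The server lands in VERIFY_RESIZE; confirm to release the old host
# (or use 'openstack server resize revert' to roll back), then start it.
openstack server resize confirm gracious_turing
openstack server start gracious_turing
```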
### Verify Instance Migration

After a short while, the instance migration should be complete. Since we have only two compute nodes, the instance should have been migrated to compute01, which you can confirm on Horizon.

You can also check the instances from the command line:
```bash
openstack server list --all-projects --long
```

```
+--------------------------------------+-------------------+--------+------------+-------------+-------------------------+------------+--------------------------------------+---------+-------------------+-----------+------------+-------------+
| ID                                   | Name              | Status | Task State | Power State | Networks                | Image Name | Image ID                             | Flavor  | Availability Zone | Host      | Properties | Host Status |
+--------------------------------------+-------------------+--------+------------+-------------+-------------------------+------------+--------------------------------------+---------+-------------------+-----------+------------+-------------+
| 9eaa3419-47cf-40bd-a981-92517c81e2c7 | gracious_turing   | ACTIVE | None       | Running     | DEMO_NET=192.168.50.128 | cirros     | 25dead1a-874c-4f19-b0b5-8ea739a15796 | custom1 | nova              | compute01 |            | UP          |
| 6ea369b3-27f1-44d2-93aa-6f6e94533e6d | peaceful_hamilton | ACTIVE | None       | Running     | DEMO_NET=192.168.50.113 | cirros     | 25dead1a-874c-4f19-b0b5-8ea739a15796 | custom1 | nova              | compute01 |            | UP          |
| c4f95fa1-d5ed-4765-8305-04b2c559dd83 | vibrant_torvalds  | ACTIVE | None       | Running     | DEMO_NET=192.168.50.150 | cirros     | 25dead1a-874c-4f19-b0b5-8ea739a15796 | custom1 | nova              | compute01 |            | UP          |
+--------------------------------------+-------------------+--------+------------+-------------+-------------------------+------------+--------------------------------------+---------+-------------------+-----------+------------+-------------+
```

As you can see, all instances are now running on the compute01 node.
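You can also re-run the per-host listing from earlier to confirm that nothing is left on the node being removed; an empty result is what you want:

```bash
# Should print no servers once all migrations are complete.
openstack server list --host compute02 --all-projects
```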
### Migrate Volumes (If Applicable)

If the compute node also hosted instance volumes (for example, as a backend for the Cinder volume service), you need to migrate those volumes as well.
Use the `openstack volume migrate` command to migrate the volumes associated with an instance from one host to another.

```bash
openstack volume migrate --help
```

```
usage: openstack volume migrate [-h] --host <host> [--force-host-copy] [--lock-volume] <volume>

Migrate volume to a new host

positional arguments:
  <volume>           Volume to migrate (name or ID)

options:
  -h, --help         show this help message and exit
  --host <host>
                     Destination host (takes the form: host@backend-name#pool)
  --force-host-copy  Enable generic host-based force-migration, which bypasses driver optimizations
  --lock-volume      If specified, the volume state will be locked and will not allow a migration to be aborted (possibly by another operation)
```
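For example, a sketch with hypothetical names; the volume name and the destination `host@backend-name#pool` string must match your actual Cinder topology:

```bash
# 'demo-volume' and 'controller01@lvm-1#lvm-1' are hypothetical; check your
# real backend pools (e.g. with 'openstack volume backend pool list', if
# available in your client) before migrating.
openstack volume migrate --host controller01@lvm-1#lvm-1 demo-volume
```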
### Stop All OpenStack Services Running on the Compute Node

Once the instances on the compute node have been migrated, you can log in to the compute node and stop all OpenStack services.

If you are using Ansible, you can use it to check and stop the services on the compute node.
For example, let's verify, from the controller/Ansible node, all the OpenStack services running on compute02:
```bash
ansible -i multinode -m raw -a "docker ps" compute02
```

```
compute02 | CHANGED | rc=0 >>
CONTAINER ID   IMAGE                                                                     COMMAND                  CREATED      STATUS                 PORTS   NAMES
efe871ef9fbf   quay.io/openstack.kolla/zun-cni-daemon:2023.1-ubuntu-jammy                "dumb-init --single-…"   3 days ago   Up 3 days (healthy)            zun_cni_daemon
f6155141547b   quay.io/openstack.kolla/zun-compute:2023.1-ubuntu-jammy                   "dumb-init --single-…"   3 days ago   Up 3 days (healthy)            zun_compute
143e53a3b9de   quay.io/openstack.kolla/ceilometer-compute:2023.1-ubuntu-jammy            "dumb-init --single-…"   3 days ago   Up 3 days (healthy)            ceilometer_compute
da3bb6f8f71b   quay.io/openstack.kolla/kuryr-libnetwork:2023.1-ubuntu-jammy              "dumb-init --single-…"   3 days ago   Up 3 days (healthy)            kuryr
7fa1016b0acf   quay.io/openstack.kolla/neutron-openvswitch-agent:2023.1-ubuntu-jammy     "dumb-init --single-…"   3 days ago   Up 3 days (healthy)            neutron_openvswitch_agent
98016d47c4d6   quay.io/openstack.kolla/openvswitch-vswitchd:2023.1-ubuntu-jammy          "dumb-init --single-…"   3 days ago   Up 3 days (healthy)            openvswitch_vswitchd
2676319cfbdc   quay.io/openstack.kolla/openvswitch-db-server:2023.1-ubuntu-jammy         "dumb-init --single-…"   3 days ago   Up 3 days (healthy)            openvswitch_db
8b750f8dc593   quay.io/openstack.kolla/nova-compute:2023.1-ubuntu-jammy                  "dumb-init --single-…"   3 days ago   Up 3 days (healthy)            nova_compute
84397013842c   quay.io/openstack.kolla/nova-libvirt:2023.1-ubuntu-jammy                  "dumb-init --single-…"   3 days ago   Up 3 days (healthy)            nova_libvirt
3768d9da5ab7   quay.io/openstack.kolla/nova-ssh:2023.1-ubuntu-jammy                      "dumb-init --single-…"   3 days ago   Up 3 days (healthy)            nova_ssh
ec5a5dd65cb4   quay.io/openstack.kolla/iscsid:2023.1-ubuntu-jammy                        "dumb-init --single-…"   3 days ago   Up 3 days                      iscsid
f4185c0884ae   quay.io/openstack.kolla/prometheus-libvirt-exporter:2023.1-ubuntu-jammy   "dumb-init --single-…"   3 days ago   Up 3 days                      prometheus_libvirt_exporter
d9942be630fa   quay.io/openstack.kolla/prometheus-cadvisor:2023.1-ubuntu-jammy           "dumb-init --single-…"   3 days ago   Up 3 days                      prometheus_cadvisor
04fec61c5671   quay.io/openstack.kolla/prometheus-node-exporter:2023.1-ubuntu-jammy      "dumb-init --single-…"   3 days ago   Up 3 days                      prometheus_node_exporter
221098bf97e7   quay.io/openstack.kolla/cron:2023.1-ubuntu-jammy                          "dumb-init --single-…"   3 days ago   Up 3 days                      cron
36fc2702d398   quay.io/openstack.kolla/kolla-toolbox:2023.1-ubuntu-jammy                 "dumb-init --single-…"   3 days ago   Up 3 days                      kolla_toolbox
80f42d83c6f7   quay.io/openstack.kolla/fluentd:2023.1-ubuntu-jammy                       "dumb-init --single-…"   3 days ago   Up 3 days                      fluentd
```

Since we deployed our OpenStack using Kolla-Ansible, the easiest way to stop these Docker services is the `kolla-ansible stop` command:

```bash
kolla-ansible -i <inventory> stop --yes-i-really-really-mean-it [ --limit <limit> ]
```

So, to stop all the OpenStack services on compute02:
```bash
source $HOME/kolla-ansible/bin/activate
source /etc/kolla/admin-openrc.sh
```

```bash
kolla-ansible -i multinode stop --yes-i-really-really-mean-it --limit compute02
```

If you are not using a configuration management tool such as Ansible, be sure to stop the **nova-compute** service and the Neutron agent service (**neutron-openvswitch-agent** in our Open vSwitch-based deployment) when you stop the services, as sketched below.
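A sketch for a manual, systemd-based deployment; the unit names below match Ubuntu/Debian packaging and may differ on your distribution:

```bash
# Stop and disable the compute and networking agents so they do not
# re-register the node on reboot. Unit names are distribution-dependent.
sudo systemctl stop nova-compute neutron-openvswitch-agent
sudo systemctl disable nova-compute neutron-openvswitch-agent
```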
### Remove OpenStack Compute Node Compute Service

Next, remove the compute node's compute service from the database. You can execute these commands from the control node.
List the compute services:
```bash
openstack compute service list
```

```
+--------------------------------------+----------------+--------------+----------+----------+-------+----------------------------+
| ID                                   | Binary         | Host         | Zone     | Status   | State | Updated At                 |
+--------------------------------------+----------------+--------------+----------+----------+-------+----------------------------+
| 67db62aa-58a2-4e66-9a8b-bb1c85bd23e2 | nova-scheduler | controller01 | internal | enabled  | up    | 2023-11-09T18:07:18.000000 |
| b9520af1-490d-43b7-98ba-a55b0349b38c | nova-conductor | controller01 | internal | enabled  | up    | 2023-11-09T18:07:18.000000 |
| 5fdae690-ddbf-4dc3-a41e-61866858054b | nova-compute   | compute01    | nova     | enabled  | up    | 2023-11-09T18:07:17.000000 |
| 464698d3-0da5-44cb-ba91-7d6782b2cff9 | nova-compute   | compute02    | nova     | disabled | down  | 2023-11-09T18:04:07.000000 |
+--------------------------------------+----------------+--------------+----------+----------+-------+----------------------------+
```

We want to remove the compute service on compute02. Obtain the ID of the compute service on the node to be removed.
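Note that compute02 above already shows as disabled and down, since we stopped its services in the previous step. If the service is still enabled on your node, you can disable it first so the scheduler stops considering the host; a sketch:

```bash
# Disable the compute service on the node before deleting it. The
# disable reason is free-form text for your operators' benefit.
openstack compute service set --disable --disable-reason "decommissioning" compute02 nova-compute
```

Then proceed to delete the compute service using its ID: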
```bash
openstack compute service delete 464698d3-0da5-44cb-ba91-7d6782b2cff9
```

### Remove OpenStack Compute Node Neutron Agents

Next, remove the Neutron agents on the compute node.
You can list the agents as follows:
```bash
openstack network agent list --host <compute-node>
```

For example:
```bash
openstack network agent list --host compute02
```

```
+--------------------------------------+--------------------+-----------+-------------------+-------+-------+---------------------------+
| ID                                   | Agent Type         | Host      | Availability Zone | Alive | State | Binary                    |
+--------------------------------------+--------------------+-----------+-------------------+-------+-------+---------------------------+
| 313cd889-08d0-423f-befa-0254bd3bdefc | Open vSwitch agent | compute02 | None              | XXX   | UP    | neutron-openvswitch-agent |
+--------------------------------------+--------------------+-----------+-------------------+-------+-------+---------------------------+
```

The `XXX` in the Alive column confirms the agent is no longer reporting. Delete the agent (**`openstack network agent delete <agent_id>`**):

```bash
openstack network agent delete 313cd889-08d0-423f-befa-0254bd3bdefc
```

### Remove the Host from the Ansible Inventory

If you are using Kolla-Ansible, it is now time to delete the compute node from the inventory, as sketched below.
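A sketch of the edit, assuming the default `multinode` inventory layout where compute hosts live in the `[compute]` group; your group names may differ:

```ini
; multinode inventory: remove the decommissioned node from every group
; it appears in. Before:
[compute]
compute01
compute02

; After:
[compute]
compute01
```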
And that completes our guide on how to safely remove a compute node from an OpenStack deployment.
### Re-add a Compute Node into OpenStack

If you want to add a new compute node into OpenStack, check our guide below:
Add Compute Nodes into OpenStack using Kolla-Ansible