{"id":8698,"date":"2021-04-21T23:14:18","date_gmt":"2021-04-21T20:14:18","guid":{"rendered":"https:\/\/kifarunix.com\/?p=8698"},"modified":"2024-03-18T22:53:52","modified_gmt":"2024-03-18T19:53:52","slug":"restore-elasticsearch-snapshot-to-another-cluster","status":"publish","type":"post","link":"https:\/\/kifarunix.com\/restore-elasticsearch-snapshot-to-another-cluster\/","title":{"rendered":"Restore Elasticsearch Snapshot to another Cluster"},"content":{"rendered":"\n
In this tutorial, we will show you how to restore an Elasticsearch snapshot to another cluster. Elasticsearch data can be backed up by taking a snapshot of a running Elasticsearch cluster. In our previous tutorial, we learnt how to backup and restore a single-node Elasticsearch cluster. The link is provided below;<\/p>\n\n\n\n
Backup and Restore Elasticsearch Index Data<\/a><\/p>\n\n\n\nSimilarly, in this tutorial, we will still be dealing with backing up and restoring data on a single-node Elasticsearch cluster.<\/p>\n\n\n\nRestoring Elasticsearch Snapshot to another Cluster<\/h2>\n\n\n\nThe snapshot was taken on Elasticsearch 7.10.1 and we are restoring to Elasticsearch 7.12.1. Read more on version compatibility<\/a>.<\/p>\n\n\n\nFor the purposes of this demo, we have two separate single-node Elasticsearch clusters. We will call them 
nodeA<\/strong><\/code> and
nodeB<\/strong><\/code><\/p>\n\n\n\n
Take Snapshot of Elasticsearch on NodeA<\/h3>\n\n\n\nBefore you can restore Elasticsearch data, you need to have taken a snapshot of the Elasticsearch cluster, specific indices or data streams on the first node (nodeA).<\/p>\n\n\n\nTo take a backup\/snapshot of the Elasticsearch cluster, follow the tutorial linked above.<\/p>\n\n\n\n
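A minimal sketch of those steps is shown below for reference, assuming the repository on nodeA is also registered as es_backup<\/strong><\/code> under \/mnt\/es_backup<\/strong><\/code>, and that the commands are run locally on nodeA;<\/p>\n\n\n\n
# Run on nodeA: register the snapshot repository (path.repo must already be set)\ncurl -X PUT \"localhost:9200\/_snapshot\/es_backup?pretty\" -H 'Content-Type: application\/json' -d'\n{\n  \"type\": \"fs\",\n  \"settings\": {\n    \"location\": \"\/mnt\/es_backup\"\n  }\n}\n'\n# Take the snapshot that will later be restored on nodeB\ncurl -X PUT \"localhost:9200\/_snapshot\/es_backup\/es_backup_202104192200?pretty\"<\/code><\/pre>\n\n\n\n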
Once you have registered the repository and taken a snapshot of the Elasticsearch data on nodeA, do the same on the second Elasticsearch server, nodeB in this case, using the same settings as on nodeA<\/strong>.<\/p>\n\n\n\n
Take Snapshot of Elasticsearch on NodeB<\/h3>\n\n\n\n
\/mnt\/es_backup<\/strong><\/code>, just as it was on nodeA;<\/p>\n\n\n\n
df -hT -P \/mnt\/es_backup\/<\/code><\/pre>\n\n\n\n
Filesystem Type Size Used Avail Use% Mounted on\n\/dev\/sdb1 ext4 3.9G 16M 3.7G 1% \/mnt\/es_backup<\/code><\/pre>\n\n\n\nDefine the path to the backup location in the Elasticsearch configuration file using the option 
path.repo<\/strong><\/code>;<\/p>\n\n\n\n
echo 'path.repo: [\"\/mnt\/es_backup\"]' >> \/etc\/elasticsearch\/elasticsearch.yml<\/code><\/pre>\n\n\n\nSet the ownership of the repository path to the 
elasticsearch<\/strong><\/code> user.<\/p>\n\n\n\n
chown -R elasticsearch: \/mnt\/es_backup\/<\/code><\/pre>\n\n\n\nRestart Elasticsearch;<\/p>\n\n\n\n
systemctl restart elasticsearch<\/code><\/pre>\n\n\n\nRegister the backup repository on nodeB;<\/p>\n\n\n\n
curl -X PUT \"192.168.59.12:9200\/_snapshot\/es_backup?pretty\" -H 'Content-Type: application\/json' -d'\n{\n \"type\": \"fs\",\n \"settings\": {\n \"location\": \"\/mnt\/es_backup\"\n }\n}\n'<\/code><\/pre>\n\n\n\n
{\n \"acknowledged\" : true\n}<\/code><\/pre>\n\n\n\n
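Optionally, you can confirm that the repository is registered using the snapshot repository GET API;<\/p>\n\n\n\n
curl -X GET \"192.168.59.12:9200\/_snapshot\/es_backup?pretty\"<\/code><\/pre>\n\n\n\n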
Create Snapshot of Elasticsearch Cluster on NodeB<\/h3>\n\n\n\nCreate a snapshot with the same name as the snapshot on NodeA. This placeholder snapshot simply initializes the repository structure; its contents will be replaced with the snapshot data copied from NodeA;<\/p>\n\n\n\n
curl -X PUT \"192.168.59.12:9200\/_snapshot\/es_backup\/es_backup_202104192200?pretty\"<\/code><\/pre>\n\n\n\n
{\n \"accepted\" : true\n}<\/code><\/pre>\n\n\n\n
ls -1 \/mnt\/es_backup\/<\/code><\/pre>\n\n\n\n
index-0\nindex.latest\nmeta-uEpnUzM4QOKOqT0g05Jg5g.dat\nsnap-uEpnUzM4QOKOqT0g05Jg5g.dat<\/code><\/pre>\n\n\n\n
Delete Snapshot Data on NodeB Repository<\/h3>\n\n\n\nSince we are going to restore snapshot data from another cluster, nodeA in this setup, delete the contents of the snapshot repository on NodeB;<\/p>\n\n\n\n
rm -rf \/mnt\/es_backup\/*<\/code><\/pre>\n\n\n\n
Copy Snapshot Data from NodeA to NodeB Repository<\/h3>\n\n\n\nNext, copy the snapshot data from the NodeA repository to the NodeB repository path. Note that this command is run on NodeA;<\/p>\n\n\n\n
rsync -avP \/mnt\/es_backup\/ root@192.168.59.12:\/mnt\/es_backup\/<\/code><\/pre>\n\n\n\n
ls -1 \/mnt\/es_backup\/<\/code><\/pre>\n\n\n\n
index-0\nindex.latest\nindices\nmeta-33qzhT82QTmvH4GkWn-vhw.dat\nsnap-33qzhT82QTmvH4GkWn-vhw.dat<\/code><\/pre>\n\n\n\n
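Note that since rsync preserves numeric UIDs, the copied files may not end up owned by the elasticsearch<\/strong><\/code> user on NodeB. If so, reset the ownership of the repository path before restarting;<\/p>\n\n\n\n
chown -R elasticsearch: \/mnt\/es_backup\/<\/code><\/pre>\n\n\n\n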
Restart Elasticsearch on NodeB<\/h3>\n\n\n\nOnce you have copied the snapshot data to the other node's backup\/snapshot repository, proceed to restart the Elasticsearch service;<\/p>\n\n\n\n
systemctl restart elasticsearch<\/code><\/pre>\n\n\n\n
View Snapshot Information<\/h3>\n\n\n\n
curl -X GET \"192.168.59.12:9200\/_snapshot\/es_backup\/es_backup_202104192200?pretty\"<\/code><\/pre>\n\n\n\n
{\n \"snapshots\" : [\n {\n \"snapshot\" : \"es_backup_202104192200\",\n \"uuid\" : \"33qzhT82QTmvH4GkWn-vhw\",\n \"version_id\" : 7100099,\n \"version\" : \"7.10.0\",\n \"indices\" : [\n \".kibana_task_manager_1\",\n \"filebeat-7.12.0-2021.04.19-000001\",\n \"filebeat-7.10.1-2021.04.16-000001\",\n \".kibana-event-log-7.10.0-000001\",\n \".async-search\",\n \".apm-agent-configuration\",\n \"ilm-history-3-000001\",\n \".kibana_1\",\n \".apm-custom-link\"\n ],\n \"data_streams\" : [ ],\n \"include_global_state\" : true,\n \"state\" : \"SUCCESS\",\n \"start_time\" : \"2021-04-19T19:57:08.912Z\",\n \"start_time_in_millis\" : 1618862228912,\n \"end_time\" : \"2021-04-19T19:57:56.691Z\",\n \"end_time_in_millis\" : 1618862276691,\n \"duration_in_millis\" : 47779,\n \"failures\" : [ ],\n \"shards\" : {\n \"total\" : 9,\n \"failed\" : 0,\n \"successful\" : 9\n }\n }\n ]\n}<\/code><\/pre>\n\n\n\n
Restoring Elasticsearch Snapshot to another Cluster<\/h3>\n\n\n\nYou can now restore the snapshot data on the other cluster, NodeB in this case;<\/p>\n\n\n\n
curl -X POST \"192.168.59.12:9200\/_snapshot\/es_backup\/es_backup_202104192200\/_restore?pretty\"<\/code><\/pre>\n\n\n\n
{\n \"accepted\" : true\n}<\/code><\/pre>\n\n\n\n
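The command above restores all indices in the snapshot. If you only need specific indices, or want to rename them to avoid clashing with indices that already exist on the target cluster, you can pass a request body to the restore API. Below is a minimal sketch; the index pattern and rename values are only examples;<\/p>\n\n\n\n
curl -X POST \"192.168.59.12:9200\/_snapshot\/es_backup\/es_backup_202104192200\/_restore?pretty\" -H 'Content-Type: application\/json' -d'\n{\n  \"indices\": \"filebeat-*\",\n  \"include_global_state\": false,\n  \"rename_pattern\": \"filebeat-(.+)\",\n  \"rename_replacement\": \"restored-filebeat-$1\"\n}\n'<\/code><\/pre>\n\n\n\n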
Verify Indices<\/h3>\n\n\n\n
curl -XGET \"192.168.59.12:9200\/_cat\/indices?pretty\"<\/code><\/pre>\n\n\n\nThey should be the same as on the previous node;<\/p>\n\n\n\n
yellow open filebeat-7.10.1-2021.04.16-000001 SUcNGbsPRN6bvkPrAfEiPw 1 1 24 0 146kb 146kb\ngreen open .apm-custom-link 6O37J9vLS1eqplnEovhdaQ 1 0 0 0 208b 208b\nyellow open filebeat-7.12.0-2021.04.19-000001 4ElgYLt9Qceo73onTw-UqA 1 1 66423 0 15.5mb 15.5mb\ngreen open .kibana_task_manager_1 ueVULNo-R92kcXMrPQaeXg 1 0 5 1 98.8kb 98.8kb\ngreen open .apm-agent-configuration 39Qhl6AgTBmIvo4MOyK7_w 1 0 0 0 208b 208b\ngreen open .kibana-event-log-7.10.0-000001 sp0-b6FZTKK3gGHzVkJy8w 1 0 2 0 11kb 11kb\ngreen open .async-search -8UhlSbyS2Oyfs_BUA6OEg 1 0 0 0 231b 231b\ngreen open .kibana_1 C08RBXLhSG2NZ1scoxSb3w 1 0 1555 12 10.7mb 10.7mb<\/code><\/pre>\n\n\n\nYou should similarly see the same data in your Kibana.<\/p>\n\n\n\n
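To further verify the restore, you can compare document counts between the two nodes; this uses the standard _cat\/count<\/strong><\/code> API (the filebeat index pattern below is just an example);<\/p>\n\n\n\n
curl -X GET \"192.168.59.12:9200\/_cat\/count\/filebeat-*?v\"<\/code><\/pre>\n\n\n\n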
Reference<\/h3>\n\n\n\n
Other tutorials<\/h3>\n\n\n\n