Logstash pipeline filters configuration as we used in our previous guide.

cat elkstack-configs/logstash/modsec.conf

input {
  beats {
    port => 5044
  }
}
filter {
  # Extract event time, log severity level, source of attack (client), and the alert message.
  grok {
    match => { "message" => "(?<event_time>%{MONTH}\s%{MONTHDAY}\s%{TIME}\s%{YEAR})\] \[\:%{LOGLEVEL:log_level}.*client\s%{IPORHOST:src_ip}:\d+]\s(?<alert_message>.*)" }
  }
  # Extract Rules File from Alert Message
  grok {
    match => { "alert_message" => "(?<rulesfile>\[file \"(/.+.conf)\"\])" }
  }
  grok {
    match => { "rulesfile" => "(?<rules_file>/.+.conf)" }
  }
  # Extract Attack Type from Rules File
  grok {
    match => { "rulesfile" => "(?<attack_type>[A-Z]+-[A-Z][^.]+)" }
  }
  # Extract Rule ID from Alert Message
  grok {
    match => { "alert_message" => "(?<ruleid>\[id \"(\d+)\"\])" }
  }
  grok {
    match => { "ruleid" => "(?<rule_id>\d+)" }
  }
  # Extract Attack Message (msg) from Alert Message
  grok {
    match => { "alert_message" => "(?<msg>\[msg \S(.*?)\"\])" }
  }
  grok {
    match => { "msg" => "(?<alert_msg>\"(.*?)\")" }
  }
  # Extract the User/Scanner Agent from Alert Message
  grok {
    match => { "alert_message" => "(?<scanner>User-Agent' \SValue: `(.*?)')" }
  }
  grok {
    match => { "scanner" => "(?<user_agent>:(.*?)\')" }
  }
  grok {
    match => { "alert_message" => "(?<agent>User-Agent: (.*?)\')" }
  }
  grok {
    match => { "agent" => "(?<user_agent>: (.*?)\')" }
  }
  # Extract the Target Host
  grok {
    match => { "alert_message" => "(hostname \"%{IPORHOST:dst_host})" }
  }
  # Extract the Request URI
  grok {
    match => { "alert_message" => "(uri \"%{URIPATH:request_uri})" }
  }
  grok {
    match => { "alert_message" => "(?<ref>referer: (.*))" }
  }
  grok {
    match => { "ref" => "(?<referer> (.*))" }
  }
  mutate {
    # Remove unnecessary characters from the fields.
    gsub => [
      "alert_msg", "[\"]", "",
      "user_agent", "[:\"'`]", "",
      "user_agent", "^\s*", "",
      "referer", "^\s*", ""
    ]
    # Remove the unnecessary fields so that we only remain with the general message,
    # rules_file, attack_type, rule_id, alert_msg, user_agent, hostname (being attacked), request URI and referer.
    remove_field => [ "alert_message", "rulesfile", "ruleid", "msg", "scanner", "agent", "ref" ]
  }
}
output {
  elasticsearch {
    hosts => ["https://${ES_NAME}:9200"]
    user => "${ELASTICSEARCH_USERNAME}"
    password => "${ELASTICSEARCH_PASSWORD}"
    ssl => true
    cacert => "config/certs/ca/ca.crt"
  }
}

Basically, we will have three ELK Stack nodes in the cluster, each running a single instance of the Elasticsearch, Logstash and Kibana containers.
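Before shipping logs, the first grok expression in the filter above can be sanity-checked offline by mirroring it with a plain regex. The sketch below is an assumption-laden illustration: the grok macros (%{MONTH}, %{TIME}, %{LOGLEVEL}, %{IPORHOST}, and so on) are expanded into simplified equivalents (IP addresses only, for brevity), and the sample ModSecurity error-log line is hypothetical, not output from this setup.

```python
import re

# Rough Python translation of the first grok pattern in the filter above.
# Named groups correspond to event_time, log_level, src_ip and alert_message.
pattern = re.compile(
    r"(?P<event_time>[A-Z][a-z]{2}\s+\d{1,2}\s+\d{2}:\d{2}:\d{2}(?:\.\d+)?\s+\d{4})\]"
    r"\s\[\:(?P<log_level>\w+).*client\s(?P<src_ip>\d{1,3}(?:\.\d{1,3}){3}):\d+\]"
    r"\s(?P<alert_message>.*)"
)

# Hypothetical ModSecurity error-log line, heavily trimmed for illustration.
sample = (
    '[Sun Jan 15 12:30:45.123456 2024] [:error] [pid 1234] '
    '[client 192.168.122.100:44818] ModSecurity: Warning. '
    '[id "920350"] [msg "Host header is a numeric IP address"]'
)

fields = pattern.search(sample).groupdict()
print(fields["log_level"], fields["src_ip"])  # error 192.168.122.100
```

If the regex fails to match your actual audit log format, adjust the sample first; the real grok pipeline can then be validated with logstash's --config.test_and_exit before deployment.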
The variables used in the Elastic configs will be defined in the Docker Compose environment variables file.
We also have two compose files:
- The first compose file sets up and starts the ELK Stack containers on the first node. It also generates the required SSL certificates and stores them on the NFS share volumes for use by the ELK Stack containers on the other two nodes.
- The second compose file starts the ELK Stack containers on the other two nodes and joins them to the first node to form a cluster.
Create the initial ELK Stack 8 cluster node Docker Compose file;
cat elkstack-configs/docker-compose-v1.yml

version: '3.8'

services:
  es_setup:
    image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
    volumes:
      - certs:/usr/share/elasticsearch/${CERTS_PATH}
    user: "0"
    command: >
      bash -c '
        echo "Creating ES certs directory..."
        [[ -d ${CERTS_PATH} ]] || mkdir ${CERTS_PATH}
        # Check if CA certificate exists
        if [ ! -f ${CERTS_PATH}/ca/ca.crt ]; then
          echo "Generating Wildcard SSL certs for ES (in PEM format)..."
          bin/elasticsearch-certutil ca --pem --days 3650 --out ${CERTS_PATH}/elkstack-ca.zip
          unzip -d ${CERTS_PATH} ${CERTS_PATH}/elkstack-ca.zip
          bin/elasticsearch-certutil cert \
            --name elkstack-certs \
            --ca-cert ${CERTS_PATH}/ca/ca.crt \
            --ca-key ${CERTS_PATH}/ca/ca.key \
            --pem \
            --dns "*.${DOMAIN_SUFFIX},localhost,${NODE01_NAME},${NODE02_NAME},${NODE03_NAME}" \
            --ip ${NODE01_IP} \
            --ip ${NODE02_IP} \
            --ip ${NODE03_IP} \
            --days ${DAYS} \
            --out ${CERTS_PATH}/elkstack-certs.zip
          unzip -d ${CERTS_PATH} ${CERTS_PATH}/elkstack-certs.zip
        else
          echo "CA certificate already exists. Skipping Certificates generation."
        fi
        # Check if Elasticsearch is ready
        until curl -s --cacert ${CERTS_PATH}/ca/ca.crt -u "elastic:${ELASTIC_PASSWORD}" https://${NODE01_NAME}:9200 | grep -q "${CLUSTER_NAME}"; do sleep 10; done
        # Set kibana_system password
        if curl -sk -XGET --cacert ${CERTS_PATH}/ca/ca.crt "https://${NODE01_NAME}:9200" -u "kibana_system:${KIBANA_PASSWORD}" | grep -q "${CLUSTER_NAME}"; then
          echo "Password for kibana_system is working. Proceeding with Elasticsearch setup for kibana_system."
        else
          echo "Failed to authenticate with kibana_system password. Trying to set the password for kibana_system."
          until curl -s -XPOST --cacert ${CERTS_PATH}/ca/ca.crt -u "elastic:${ELASTIC_PASSWORD}" -H "Content-Type: application/json" https://${NODE01_NAME}:9200/_security/user/kibana_system/_password -d "{\"password\":\"${KIBANA_PASSWORD}\"}" | grep -q "^{}"; do sleep 10; done
        fi
        echo "Setup is done!"
      '
    networks:
      - elastic
    healthcheck:
      test: ["CMD-SHELL", "[ -f ${CERTS_PATH}/elkstack-certs/elkstack-certs.crt ]"]
      interval: 1s
      timeout: 5s
      retries: 120

  elasticsearch:
    depends_on:
      es_setup:
        condition: service_healthy
    container_name: ${NODE01_NAME}
    image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
    environment:
      - node.name=${NODE01_NAME}
      - network.publish_host=${NODE01_IP}
      - cluster.name=${CLUSTER_NAME}
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms1g -Xmx1g"
      - ELASTIC_PASSWORD=${ELASTIC_PASSWORD}
      - xpack.security.enabled=true
      - xpack.security.http.ssl.enabled=true
      - xpack.security.transport.ssl.enabled=true
      - xpack.security.enrollment.enabled=false
      - xpack.security.autoconfiguration.enabled=false
      - xpack.security.http.ssl.key=certs/elkstack-certs/elkstack-certs.key
      - xpack.security.http.ssl.certificate=certs/elkstack-certs/elkstack-certs.crt
      - xpack.security.http.ssl.certificate_authorities=certs/ca/ca.crt
      - xpack.security.transport.ssl.key=certs/elkstack-certs/elkstack-certs.key
      - xpack.security.transport.ssl.certificate=certs/elkstack-certs/elkstack-certs.crt
      - xpack.security.transport.ssl.certificate_authorities=certs/ca/ca.crt
      - cluster.initial_master_nodes=${NODE01_NAME},${NODE02_NAME},${NODE03_NAME}
      - discovery.seed_hosts=${NODE01_IP},${NODE02_IP},${NODE03_IP}
      - KIBANA_USERNAME=${KIBANA_USERNAME}
      - KIBANA_PASSWORD=${KIBANA_PASSWORD}
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - elasticsearch_data:/usr/share/elasticsearch/data
      - certs:/usr/share/elasticsearch/${CERTS_PATH}
      - /etc/hosts:/etc/hosts
    ports:
      - ${ES_PORT}:9200
      - ${ES_TS_PORT}:9300
    networks:
      - elastic
    healthcheck:
      test: ["CMD-SHELL", "curl --fail -k -s -u elastic:${ELASTIC_PASSWORD} --cacert ${CERTS_PATH}/ca/ca.crt https://${NODE01_NAME}:9200"]
      interval: 30s
      timeout: 10s
      retries: 5
    restart: unless-stopped

  kibana:
    image: docker.elastic.co/kibana/kibana:${STACK_VERSION}
    container_name: kibana
    environment:
      - SERVER_NAME=${KIBANA_SERVER_HOST}
      - ELASTICSEARCH_HOSTS=https://${NODE01_NAME}:9200
      - ELASTICSEARCH_SSL_CERTIFICATEAUTHORITIES=${CERTS_PATH}/ca/ca.crt
      - ELASTICSEARCH_USERNAME=${KIBANA_USERNAME}
      - ELASTICSEARCH_PASSWORD=${KIBANA_PASSWORD}
      - XPACK_REPORTING_ROLES_ENABLED=false
      - XPACK_REPORTING_KIBANASERVER_HOSTNAME=localhost
      - XPACK_ENCRYPTEDSAVEDOBJECTS_ENCRYPTIONKEY=${SAVEDOBJECTS_ENCRYPTIONKEY}
      - XPACK_SECURITY_ENCRYPTIONKEY=${SECURITY_ENCRYPTIONKEY}
      - XPACK_REPORTING_ENCRYPTIONKEY=${REPORTING_ENCRYPTIONKEY}
    volumes:
      - kibana_data:/usr/share/kibana/data
      - certs:/usr/share/kibana/${CERTS_PATH}
      - /etc/hosts:/etc/hosts
    ports:
      - ${KIBANA_PORT}:5601
    networks:
      - elastic
    depends_on:
      elasticsearch:
        condition: service_healthy
    restart: unless-stopped

  logstash:
    image: docker.elastic.co/logstash/logstash:${STACK_VERSION}
    container_name: logstash
    environment:
      - XPACK_MONITORING_ENABLED=false
      - ELASTICSEARCH_USERNAME=${ES_USER}
      - ELASTICSEARCH_PASSWORD=${ELASTIC_PASSWORD}
      - NODE_NAME=${NODE01_NAME}
      - CERTS_PATH=${CERTS_PATH}
    ports:
      - ${BEATS_INPUT_PORT}:5044
    volumes:
      - logstash_filters:/usr/share/logstash/pipeline/:ro
      - certs:/usr/share/logstash/${CERTS_PATH}
      - logstash_data:/usr/share/logstash/data
      - /etc/hosts:/etc/hosts
    networks:
      - elastic
    depends_on:
      elasticsearch:
        condition: service_healthy
    restart: unless-stopped

volumes:
  certs:
    driver: local
    driver_opts:
      type: nfs
      o: "addr=${NFS_SVR_IP},nfsvers=4,rw"
      device: ":${NFS_ELK_CERTS}"
  elasticsearch_data:
    driver: local
    driver_opts:
      type: nfs
      o: "addr=${NFS_SVR_IP},nfsvers=4,rw"
      device: ":${NFS_ELK_DATA}/elasticsearch/01"
  kibana_data:
    driver: local
    driver_opts:
      type: nfs
      o: "addr=${NFS_SVR_IP},nfsvers=4,rw"
      device: ":${NFS_ELK_DATA}/kibana/01"
  logstash_filters:
    driver: local
    driver_opts:
      type: nfs
      o: "addr=${NFS_SVR_IP},nfsvers=4,rw"
      device: ":${NFS_ELK_CONFIGS}/logstash"
  logstash_data:
    driver: local
    driver_opts:
      type: nfs
      o: "addr=${NFS_SVR_IP},nfsvers=4,rw"
      device: ":${NFS_ELK_DATA}/logstash/01"

networks:
  elastic:

The second compose file;
cat elkstack-configs/docker-compose-v2.yml

services:
  elasticsearch:
    container_name: ${NODENN_NAME}
    image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
    command: >
      bash -c '
        until [ -f "${CERTS_PATH}/elkstack-certs/elkstack-certs.crt" ]; do
          sleep 10;
        done;
        exec /usr/local/bin/docker-entrypoint.sh
      '
    environment:
      - node.name=${NODENN_NAME}
      - network.publish_host=${NODENN_IP}
      - cluster.name=${CLUSTER_NAME}
      - bootstrap.memory_lock=true
      - cluster.initial_master_nodes=${NODE01_NAME},${NODE02_NAME},${NODE03_NAME}
      - discovery.seed_hosts=${NODE01_IP},${NODE02_IP},${NODE03_IP}
      - "ES_JAVA_OPTS=-Xms1g -Xmx1g"
      - ELASTIC_PASSWORD=${ELASTIC_PASSWORD}
      - xpack.security.enabled=true
      - xpack.security.http.ssl.enabled=true
      - xpack.security.enrollment.enabled=false
      - xpack.security.autoconfiguration.enabled=false
      - xpack.security.transport.ssl.enabled=true
      - xpack.security.http.ssl.key=certs/elkstack-certs/elkstack-certs.key
      - xpack.security.http.ssl.certificate=certs/elkstack-certs/elkstack-certs.crt
      - xpack.security.http.ssl.certificate_authorities=certs/ca/ca.crt
      - xpack.security.transport.ssl.key=certs/elkstack-certs/elkstack-certs.key
      - xpack.security.transport.ssl.certificate=certs/elkstack-certs/elkstack-certs.crt
      - xpack.security.transport.ssl.certificate_authorities=certs/ca/ca.crt
      - KIBANA_USERNAME=${KIBANA_USERNAME}
      - KIBANA_PASSWORD=${KIBANA_PASSWORD}
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - elasticsearch_data:/usr/share/elasticsearch/data
      - certs:/usr/share/elasticsearch/${CERTS_PATH}
      - /etc/hosts:/etc/hosts
    ports:
      - ${ES_PORT}:9200
      - ${ES_TS_PORT}:9300
    networks:
      - elastic
    healthcheck:
      test: ["CMD-SHELL", "curl --fail -k -s -u elastic:${ELASTIC_PASSWORD} --cacert ${CERTS_PATH}/ca/ca.crt https://${NODENN_NAME}:9200"]
      interval: 30s
      timeout: 10s
      retries: 5
    restart: unless-stopped

  kibana:
    image: docker.elastic.co/kibana/kibana:${STACK_VERSION}
    container_name: kibana
    environment:
      - SERVER_NAME=${KIBANA_SERVER_HOST}
      - ELASTICSEARCH_HOSTS=https://${NODENN_NAME}:9200
      - ELASTICSEARCH_SSL_CERTIFICATEAUTHORITIES=${CERTS_PATH}/ca/ca.crt
      - ELASTICSEARCH_USERNAME=${KIBANA_USERNAME}
      - ELASTICSEARCH_PASSWORD=${KIBANA_PASSWORD}
      - XPACK_REPORTING_ROLES_ENABLED=false
      - XPACK_REPORTING_KIBANASERVER_HOSTNAME=localhost
      - XPACK_ENCRYPTEDSAVEDOBJECTS_ENCRYPTIONKEY=${SAVEDOBJECTS_ENCRYPTIONKEY}
      - XPACK_SECURITY_ENCRYPTIONKEY=${SECURITY_ENCRYPTIONKEY}
      - XPACK_REPORTING_ENCRYPTIONKEY=${REPORTING_ENCRYPTIONKEY}
    volumes:
      - kibana_data:/usr/share/kibana/data
      - certs:/usr/share/kibana/${CERTS_PATH}
      - /etc/hosts:/etc/hosts
    ports:
      - ${KIBANA_PORT}:5601
    networks:
      - elastic
    depends_on:
      elasticsearch:
        condition: service_healthy
    restart: unless-stopped

  logstash:
    image: docker.elastic.co/logstash/logstash:${STACK_VERSION}
    container_name: logstash
    environment:
      - XPACK_MONITORING_ENABLED=false
      - ELASTICSEARCH_USERNAME=${ES_USER}
      - ELASTICSEARCH_PASSWORD=${ELASTIC_PASSWORD}
      - NODE_NAME=${NODENN_NAME}
      - CERTS_PATH=${CERTS_PATH}
    ports:
      - ${BEATS_INPUT_PORT}:5044
    volumes:
      - certs:/usr/share/logstash/${CERTS_PATH}
      - logstash_filters:/usr/share/logstash/pipeline/:ro
      - logstash_data:/usr/share/logstash/data
      - /etc/hosts:/etc/hosts
    networks:
      - elastic
    depends_on:
      elasticsearch:
        condition: service_healthy
    restart: unless-stopped

volumes:
  certs:
    driver: local
    driver_opts:
      type: nfs
      o: "addr=${NFS_SVR_IP},nfsvers=4,rw"
      device: ":${NFS_ELK_CERTS}"
  elasticsearch_data:
    driver: local
    driver_opts:
      type: nfs
      o: "addr=${NFS_SVR_IP},nfsvers=4,rw"
      device: ":${NFS_ELK_DATA}/elasticsearch/NN"
  kibana_data:
    driver: local
    driver_opts:
      type: nfs
      o: "addr=${NFS_SVR_IP},nfsvers=4,rw"
      device: ":${NFS_ELK_DATA}/kibana/NN"
  logstash_filters:
    driver: local
    driver_opts:
      type: nfs
      o: "addr=${NFS_SVR_IP},nfsvers=4,rw"
      device: ":${NFS_ELK_CONFIGS}/logstash"
  logstash_data:
    driver: local
    driver_opts:
      type: nfs
      o: "addr=${NFS_SVR_IP},nfsvers=4,rw"
      device: ":${NFS_ELK_DATA}/logstash/NN"

networks:
  elastic:

The variables are defined in the Docker Compose environment variables file;
cat elkstack-configs/.env

# Version of Elastic products
STACK_VERSION=8.12.0

# Set the cluster name
CLUSTER_NAME=elk-docker-cluster

# Set Elasticsearch Node Name
NODE01_NAME=es01
NODE02_NAME=es02
NODE03_NAME=es03

# Docker Host IP to advertise to cluster nodes
NODE01_IP=192.168.122.60
NODE02_IP=192.168.122.123
NODE03_IP=192.168.122.152

# Elasticsearch super user
ES_USER=elastic

# Password for the 'elastic' user (at least 6 characters). No special characters, ! or @ or $.
ELASTIC_PASSWORD=ChangeME

# Elasticsearch container name
ES_NAME=elasticsearch

# Port to expose Elasticsearch HTTP API to the host
ES_PORT=9200
#ES_PORT=127.0.0.1:9200
ES_TS_PORT=9300

# Port to expose Kibana to the host
KIBANA_PORT=5601
KIBANA_SERVER_HOST=0.0.0.0

# Kibana encryption keys. Each requires at least 32 characters. Can be generated using `openssl rand -hex 16`
SAVEDOBJECTS_ENCRYPTIONKEY=ca11560aec8410ff002d011c2a172608
REPORTING_ENCRYPTIONKEY=288f06b3a14a7f36dd21563d50ec76d4
SECURITY_ENCRYPTIONKEY=62c781d3a2b2eaee1d4cebcc6bf42b48

# Kibana - Elasticsearch Authentication Credentials for user kibana_system
# Password for the 'kibana_system' user (at least 6 characters). No special characters, ! or @ or $.
KIBANA_USERNAME=kibana_system
KIBANA_PASSWORD=ChangeME

# Domain Suffix for ES Wildcard SSL certs
DOMAIN_SUFFIX=kifarunix-demo.com

# Generated Certs Validity Period
DAYS=3650

# SSL/TLS Certs Directory
CERTS_PATH=config/certs

# Logstash Input Port
BEATS_INPUT_PORT=5044

# NFS Server
NFS_SVR_IP=192.168.122.47
NFS_ELK_CERTS=/mnt/elkstack/certs
NFS_ELK_DATA=/mnt/elkstack/data
NFS_ELK_CONFIGS=/mnt/elkstack/configs

The Docker environment variables will be the same across the entire cluster. We will, however, replace NODENN with the respective node variable: on the second node, NODENN changes to NODE02, and on the third node, to NODE03. We also mount the NFS share data paths based on the number of the node the container is running on, signified by NN; hence, on node02, NN=02, and on node03, NN=03.
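The NODENN/NN substitution is a plain string replacement over the compose file. The sketch below illustrates the idea; the two-line template is a made-up excerpt of docker-compose-v2.yml, not the full file.

```python
# Sketch of the per-node NN substitution applied to docker-compose-v2.yml.
def render_for_node(compose_text: str, node_number: str) -> str:
    """Replace every literal 'NN' with the node number, e.g. '02' or '03'.

    Note the assumption: the compose file contains no other 'NN' substrings,
    so a blanket replace is safe. NODENN becomes NODE02/NODE03 as a side
    effect, since it contains the same 'NN' token.
    """
    return compose_text.replace("NN", node_number)

# Illustrative two-line excerpt of the compose template.
template = (
    'container_name: ${NODENN_NAME}\n'
    'device: ":${NFS_ELK_DATA}/elasticsearch/NN"\n'
)

print(render_for_node(template, "02"))
# container_name: ${NODE02_NAME}
# device: ":${NFS_ELK_DATA}/elasticsearch/02"
```

This mirrors what the Ansible replace task in the next section does on each node.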
Copy ELK Stack Configs and Docker Compose Files to Respective Nodes
Now, we need to copy the Docker Compose files, environment variables and configs to the respective nodes.
cat roles/copy-docker-compose/tasks/main.yml

---
- name: Create directory for ELK Stack Docker Compose files
  file:
    path: "{{ elkstack_base_path }}"
    state: directory
  when: inventory_hostname in groups['elk-stack']

- name: Create Docker-Compose file for ELK Node 01
  copy:
    src: "{{ src_elkstack_configs }}/docker-compose-v1.yml"
    dest: "{{ elkstack_base_path }}/docker-compose.yml"
  when: ansible_host == 'node01'

- name: Create Docker-Compose file for ELK Node 02/03
  copy:
    src: "{{ src_elkstack_configs }}/docker-compose-v2.yml"
    dest: "{{ elkstack_base_path }}/docker-compose.yml"
  when: ansible_host in ["node02", "node03"]

- name: Copy Environment Variables
  copy:
    src: "{{ src_elkstack_configs }}/.env"
    dest: "{{ elkstack_base_path }}"
  when: inventory_hostname in groups['elk-stack']

- name: Update the NODENN variable accordingly
  replace:
    path: "{{ elkstack_base_path }}/docker-compose.yml"
    regexp: 'NN'
    replace: "{{ '02' if ansible_host == 'node02' else '03' }}"
  when: ansible_host in ["node02", "node03"]

Deploying ELK Stack 8 Cluster on Docker using Ansible
Next, create a task to deploy the ELK Stack 8 cluster on Docker.
We are using Docker Compose to bring up the cluster containers.
cat roles/deploy-elk-cluster/tasks/main.yml

---
- name: Deploy ELK Stack on Docker using Docker Compose
  command: docker compose up -d
  args:
    chdir: "{{ elkstack_base_path }}/"
  when: inventory_hostname in groups['elk-stack']
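Once the playbook has run on all three nodes, you can confirm the cluster formed by querying the _cluster/health API from any node, for example with curl -s --cacert /path/to/ca.crt -u elastic:PASSWORD https://es01:9200/_cluster/health?pretty. The sketch below shows the kind of check you would apply to the response; the JSON is a hypothetical, trimmed health payload, not captured from this deployment.

```python
import json

# Hypothetical _cluster/health response, trimmed for illustration.
sample_health = json.loads("""
{
  "cluster_name": "elk-docker-cluster",
  "status": "green",
  "number_of_nodes": 3,
  "number_of_data_nodes": 3
}
""")

def cluster_ok(health: dict, expected_nodes: int = 3) -> bool:
    # A healthy 3-node cluster reports status green with all nodes joined.
    return (health.get("status") == "green"
            and health.get("number_of_nodes") == expected_nodes)

print(cluster_ok(sample_health))  # True
```

A yellow status or a node count below three would indicate that one of the other two nodes has not yet joined, in which case check that the certs volume mounted and that port 9300 is reachable between the Docker hosts.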