```
# Version of Elastic products
STACK_VERSION=8.11.4

# Set the cluster name
CLUSTER_NAME=elk-docker-cluster

# Set Elasticsearch node name
NODE_NAME=es01

# Elasticsearch super user
ES_USER=elastic

# Password for the 'elastic' user (at least 6 characters; avoid the special characters !, @ and $)
ELASTIC_PASSWORD=ChangeME

# Elasticsearch container name
ES_NAME=elasticsearch

# Port to expose Elasticsearch HTTP API to the host
ES_PORT=9200
#ES_PORT=127.0.0.1:9200

# Port to expose Kibana to the host
KIBANA_PORT=5601
KIBANA_SERVER_HOST=localhost

# Kibana encryption keys. Each requires at least 32 characters and can be generated using `openssl rand -hex 16`
SAVEDOBJECTS_ENCRYPTIONKEY=ca11560aec8410ff002d011c2a172608
REPORTING_ENCRYPTIONKEY=288f06b3a14a7f36dd21563d50ec76d4
SECURITY_ENCRYPTIONKEY=62c781d3a2b2eaee1d4cebcc6bf42b48

# Kibana - Elasticsearch authentication credentials for the kibana_system user
# Password for the 'kibana_system' user (at least 6 characters; avoid the special characters !, @ and $)
KIBANA_USERNAME=kibana_system
KIBANA_PASSWORD=ChangeME

# Domain suffix for ES wildcard SSL certs
DOMAIN_SUFFIX=kifarunix-demo.com

# Elasticsearch certificate validity period in days
DAYS=3650
```
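As noted in the comments, each of the three Kibana encryption keys must be at least 32 characters long. A quick way to generate fresh values for all three at once, a small shell sketch built around the `openssl rand -hex 16` command referenced above:

```bash
# Print a freshly generated value for each of the three Kibana encryption keys;
# paste the output into the .env file in place of the sample keys.
for key in SAVEDOBJECTS REPORTING SECURITY; do
  echo "${key}_ENCRYPTIONKEY=$(openssl rand -hex 16)"
done
```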
#### Define Logstash Data Processing Pipeline

In this setup, we will configure Logstash to receive event data from Beats (Filebeat, to be specific), process it further, and stash it into Elasticsearch, the search and analytics engine.
Note that Logstash is only necessary if you need to apply further processing to your event data, for example extracting custom fields or mutating the events. Otherwise, you can push the data directly from Beats to Elasticsearch.
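For context, on the client whose logs you are shipping, Filebeat would point at Logstash rather than Elasticsearch. A minimal sketch of the relevant `filebeat.yml` output section, with the Docker host address left as a placeholder you would substitute:

```yaml
# filebeat.yml (sketch) - ship events to the Logstash container instead of Elasticsearch.
# Replace <docker-host-address> with the IP or hostname of the host running the stack.
output.logstash:
  hosts: ["<docker-host-address>:5044"]
```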
Below is a sample Logstash processing pipeline for ModSecurity audit logs;
```
mkdir -p logstash/conf.d
```

```
vim logstash/conf.d/modsec.conf
```

```
input {
  beats {
    port => 5044
  }
}
filter {
  # Extract event time, log severity level, source of attack (client), and the alert message.
  grok {
    match => { "message" => "(?<event_time>%{MONTH}\s%{MONTHDAY}\s%{TIME}\s%{YEAR})\] \[\:%{LOGLEVEL:log_level}.*client\s%{IPORHOST:src_ip}:\d+]\s(?<alert_message>.*)" }
  }
  # Extract Rules File from Alert Message
  grok {
    match => { "alert_message" => "(?<rulesfile>\[file \"(/.+.conf)\"\])" }
  }
  grok {
    match => { "rulesfile" => "(?<rules_file>/.+.conf)" }
  }
  # Extract Attack Type from Rules File
  grok {
    match => { "rulesfile" => "(?<attack_type>[A-Z]+-[A-Z][^.]+)" }
  }
  # Extract Rule ID from Alert Message
  grok {
    match => { "alert_message" => "(?<ruleid>\[id \"(\d+)\"\])" }
  }
  grok {
    match => { "ruleid" => "(?<rule_id>\d+)" }
  }
  # Extract Attack Message (msg) from Alert Message
  grok {
    match => { "alert_message" => "(?<msg>\[msg \S(.*?)\"\])" }
  }
  grok {
    match => { "msg" => "(?<alert_msg>\"(.*?)\")" }
  }
  # Extract the User/Scanner Agent from Alert Message
  grok {
    match => { "alert_message" => "(?<scanner>User-Agent' \SValue: `(.*?)')" }
  }
  grok {
    match => { "scanner" => "(?<user_agent>:(.*?)\')" }
  }
  grok {
    match => { "alert_message" => "(?<agent>User-Agent: (.*?)\')" }
  }
  grok {
    match => { "agent" => "(?<user_agent>: (.*?)\')" }
  }
  # Extract the Target Host
  grok {
    match => { "alert_message" => "(hostname \"%{IPORHOST:dst_host})" }
  }
  # Extract the Request URI
  grok {
    match => { "alert_message" => "(uri \"%{URIPATH:request_uri})" }
  }
  grok {
    match => { "alert_message" => "(?<ref>referer: (.*))" }
  }
  grok {
    match => { "ref" => "(?<referer> (.*))" }
  }
  mutate {
    # Remove unnecessary characters from the fields.
    gsub => [
      "alert_msg", "[\"]", "",
      "user_agent", "[:\"'`]", "",
      "user_agent", "^\s*", "",
      "referer", "^\s*", ""
    ]
    # Remove the unnecessary fields so we only remain with the general message,
    # rules_file, attack_type, rule_id, alert_msg, user_agent, hostname (being attacked),
    # request_uri and referer.
    remove_field => [ "alert_message", "rulesfile", "ruleid", "msg", "scanner", "agent", "ref" ]
  }
}
output {
  elasticsearch {
    hosts => ["https://${ES_NAME}:9200"]
    user => "${ELASTICSEARCH_USERNAME}"
    password => "${ELASTICSEARCH_PASSWORD}"
    ssl => true
    cacert => "config/certs/ca/ca.crt"
  }
}
```
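Note that the output block references `${ES_NAME}`, `${ELASTICSEARCH_USERNAME}` and `${ELASTICSEARCH_PASSWORD}`; Logstash resolves these from the container's environment, not from the pipeline file itself. Assuming your Compose service definition follows the `.env` file above, the mapping would look something along these lines (a sketch of just the environment stanza, not the full service definition):

```yaml
# docker-compose.yml (sketch) - environment for the logstash service,
# mapping the .env variables to the names the pipeline expects.
  logstash:
    environment:
      - ES_NAME=${ES_NAME}
      - ELASTICSEARCH_USERNAME=${ES_USER}
      - ELASTICSEARCH_PASSWORD=${ELASTIC_PASSWORD}
```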
#### Verify Docker Compose File Syntax

Check the Docker Compose file syntax;
```
docker-compose -f docker-compose.yml config
```

If there are any errors, they will be printed. Otherwise, the rendered Docker Compose file contents are printed to standard output.
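If you only care about the exit status, for example in a script, Compose also supports a quiet mode that validates the file without printing it:

```bash
# Validate only; prints nothing and returns a non-zero exit code on error.
docker-compose -f docker-compose.yml config --quiet && echo "compose file OK"
```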
If you are in the same directory where the `docker-compose.yml` file is located, simply run;

```
docker-compose config
```

#### Deploy ELK Stack 8 Using Docker Compose File
Everything is now set up and we are ready to build and start our Elastic Stack containers using the `docker-compose up` command.

Navigate to the main directory where the Docker Compose file is located. In my setup, the directory is `$HOME/elastic-docker`.

```
cd $HOME/elastic-docker
```

```
docker-compose up
```

> The command creates and starts the containers in the foreground.
Sample output;
```
...
elasticsearch | {"@timestamp":"2024-01-16T19:02:40.690Z", "log.level": "INFO", "current.health":"GREEN","message":"Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[.security-7][0]]]).","previous.health":"YELLOW","reason":"shards started [[.security-7][0]]" , "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"elasticsearch[es01][masterService#updateTask][T#1]","log.logger":"org.elasticsearch.cluster.routing.allocation.AllocationService","elasticsearch.cluster.uuid":"lhhCqqyfRrOuJ5lIHuOCww","elasticsearch.node.id":"Q9-_fRo7Q6awepd8PYGvnQ","elasticsearch.node.name":"es01","elasticsearch.cluster.name":"elk-docker-cluster"}
...
logstash  | [2024-01-16T19:03:03,172][INFO ][logstash.javapipeline    ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>500, "pipeline.sources"=>["/usr/share/logstash/pipeline/modsec.conf"], :thread=>"#"}
logstash  | [2024-01-16T19:03:03,927][INFO ][logstash.javapipeline    ][main] Pipeline Java execution initialization time {"seconds"=>0.75}
logstash  | [2024-01-16T19:03:03,939][INFO ][logstash.inputs.beats    ][main] Starting input listener {:address=>"0.0.0.0:5044"}
logstash  | [2024-01-16T19:03:03,944][INFO ][logstash.javapipeline    ][main] Pipeline started {"pipeline.id"=>"main"}
logstash  | [2024-01-16T19:03:03,962][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
logstash  | [2024-01-16T19:03:04,023][INFO ][org.logstash.beats.Server][main][5c2281c3c7dc3fa0cecb74e0eb418d31f5ca88d19e9d33bf9ac5902cf7ffec49] Starting server on port: 5044
...
{"log":"[2024-01-16T19:03:08.805+00:00][INFO ][plugins.alerting] Installing ILM policy .alerts-ilm-policy\n","stream":"stdout","time":"2024-01-16T19:03:08.80584097Z"}
{"log":"[2024-01-16T19:03:08.807+00:00][INFO ][plugins.alerting] Installing component template .alerts-framework-mappings\n","stream":"stdout","time":"2024-01-16T19:03:08.807954546Z"}
{"log":"[2024-01-16T19:03:08.809+00:00][INFO ][plugins.alerting] Installing component template .alerts-legacy-alert-mappings\n","stream":"stdout","time":"2024-01-16T19:03:08.809793742Z"}
{"log":"[2024-01-16T19:03:08.826+00:00][INFO ][plugins.alerting] Installing component template .alerts-ecs-mappings\n","stream":"stdout","time":"2024-01-16T19:03:08.827201234Z"}
{"log":"[2024-01-16T19:03:08.839+00:00][INFO ][plugins.ruleRegistry] Installing component template .alerts-technical-mappings\n","stream":"stdout","time":"2024-01-16T19:03:08.840094086Z"}
{"log":"[2024-01-16T19:03:10.330+00:00][INFO ][http.server.Kibana] http server running at http://0.0.0.0:5601\n","stream":"stdout","time":"2024-01-16T19:03:10.330814866Z"}
...
```

When you stop the `docker-compose up` command, all containers are stopped.

From another console, you can check the running containers. Note that you can use the `docker-compose` command as you would the `docker` command.
However, to use it, you need to be in the same directory as the Compose file, or specify its path using the `-f` option.

```
docker-compose ps
```

```
NAME            IMAGE                                                  COMMAND                  SERVICE         CREATED          STATUS                    PORTS
elasticsearch   docker.elastic.co/elasticsearch/elasticsearch:8.11.4   "/bin/tini -- /usr/l…"   elasticsearch   21 minutes ago   Up 17 minutes (healthy)   0.0.0.0:9200->9200/tcp, :::9200->9200/tcp, 0.0.0.0:9300->9300/tcp, :::9300->9300/tcp
kibana          docker.elastic.co/kibana/kibana:8.11.4                 "/bin/tini -- /usr/l…"   kibana          21 minutes ago   Up 16 minutes             0.0.0.0:5601->5601/tcp, :::5601->5601/tcp
logstash        docker.elastic.co/logstash/logstash:8.11.4             "/usr/local/bin/dock…"   logstash        21 minutes ago   Up 16 minutes             0.0.0.0:5044->5044/tcp, :::5044->5044/tcp, 9600/tcp
```

From the output, you can see that the containers are running and that their ports are exposed on the host (on all interfaces) to allow external access.
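For instance, to check the container status from any directory, you could point `docker-compose` at the Compose file used in this setup (assuming the `$HOME/elastic-docker` path from earlier):

```bash
# Run docker-compose from anywhere by passing the path to the Compose file.
docker-compose -f $HOME/elastic-docker/docker-compose.yml ps
```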
You can run the stack containers in the background using the `-d` option. Press `ctrl+c` to cancel the foreground command and stop the containers.

To relaunch the containers in the background;
```
docker-compose up -d
```

```
[+] Running 4/4
 ✔ Container elkstack-docker-es_setup-1  Healthy   0.0s
 ✔ Container elasticsearch               Healthy   0.0s
 ✔ Container logstash                    Started   0.0s
 ✔ Container kibana                      Started
```

You can as well list the running containers using the `docker` command;
```
docker ps
```

```
CONTAINER ID   IMAGE                                                  COMMAND                  CREATED          STATUS                    PORTS                                                                                  NAMES
40516edb2827   docker.elastic.co/logstash/logstash:8.11.4             "/usr/local/bin/dock…"   24 minutes ago   Up 22 seconds             0.0.0.0:5044->5044/tcp, :::5044->5044/tcp, 9600/tcp                                    logstash
2efaeccc67a3   docker.elastic.co/kibana/kibana:8.11.4                 "/bin/tini -- /usr/l…"   24 minutes ago   Up 22 seconds             0.0.0.0:5601->5601/tcp, :::5601->5601/tcp                                              kibana
a3f453974592   docker.elastic.co/elasticsearch/elasticsearch:8.11.4   "/bin/tini -- /usr/l…"   24 minutes ago   Up 53 seconds (healthy)   0.0.0.0:9200->9200/tcp, :::9200->9200/tcp, 0.0.0.0:9300->9300/tcp, :::9300->9300/tcp   elasticsearch
```

To find the details of a container, use the `docker inspect <container-name>` command. For example;

```
docker inspect elasticsearch
```

To get the logs of a container, use the `docker logs [OPTIONS] CONTAINER` command. For example, to get the Elasticsearch container logs;

```
docker logs elasticsearch
```

If you need to check a specific number of log lines, use the `--tail` option. For example, to get the last 50 log lines and follow the log;

```
docker logs --tail 50 -f elasticsearch
```

Or check the container log file, `/var/lib/docker/containers/<long-container-id>/<long-container-id>-json.log`.
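Since `docker inspect` prints a large JSON document, you can extract a single field with the `--format` option. For example, to check the Elasticsearch container's health status (the same `healthy` state shown in the `docker ps` output above):

```bash
# Print only the health status field from the container's inspect output.
docker inspect --format '{{.State.Health.Status}}' elasticsearch
```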
## Accessing Kibana Container from Browser

Once the stack is up and running, you can access Kibana externally using the host IP address and the port on which it is exposed. In our setup, Kibana container port 5601 is exposed on the same port on the host;
```
docker port kibana
```

```
5601/tcp -> 0.0.0.0:5601
5601/tcp -> [::]:5601
```

This means that you can reach the Kibana container on port 5601 via any interface on the host. You can use the same command to check the port exposure of the other containers.
Therefore, you can access Kibana from your browser using the container host address, **http://<IP-Address>:5601**.

Login using the `elastic` user. You can add other user accounts thereafter.
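Before opening the browser, you can optionally confirm that Kibana is answering on the exposed port from any machine that can reach the host; you should get an HTTP response (typically a redirect to the login page):

```bash
# Probe the exposed Kibana port; substitute your container host address.
curl -I http://<IP-Address>:5601
```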