{"id":9429,"date":"2021-07-01T23:09:13","date_gmt":"2021-07-01T20:09:13","guid":{"rendered":"https:\/\/kifarunix.com\/?p=9429"},"modified":"2024-03-18T19:52:42","modified_gmt":"2024-03-18T16:52:42","slug":"install-elk-stack-on-rocky-linux-8","status":"publish","type":"post","link":"https:\/\/kifarunix.com\/install-elk-stack-on-rocky-linux-8\/","title":{"rendered":"Install ELK Stack on Rocky Linux 8"},"content":{"rendered":"\n
Welcome to our demo on how to install ELK Stack<\/a> on Rocky Linux 8.<\/p>\n\n\n\n ELK is the acronym for three open source projects: Elasticsearch, Logstash, and Kibana<\/a>. Elasticsearch is a search and analytics engine. Logstash is a server\u2011side data processing pipeline that ingests data from multiple sources simultaneously, transforms it, and then sends it to a \u201cstash\u201d like Elasticsearch. Kibana lets users visualize data in Elasticsearch with charts and graphs.<\/p>\n\n\n\n The order of installation of the Elastic Stack components is of great importance. It usually takes the order Elasticsearch > Kibana > Logstash > Beats<\/strong>. Also note that all components should be of the same version.<\/p>\n\n\n\n To install the Elastic Stack components on a Rocky Linux 8 system, you can choose to create the Elastic RPM repo or install each component using its respective RPM binary.<\/p>\n\n\n\n We use the Elastic repositories in this guide.<\/p>\n\n\n\n The Elastic Stack version 7.x repo can be created by running the command below.<\/p>\n\n\n\n Next, run a system package update.<\/p>\n\n\n\n You can then install Elasticsearch on Rocky Linux 8 from the created Elastic RPM repos.<\/p>\n\n\n\n Out of the box, Elasticsearch works well with the default configuration options. In this demo, however, we are going to make a few changes as per Important Elasticsearch Configurations<\/a>.<\/p>\n\n\n\n Set the Elasticsearch bind address to a specific IP if you need to enable remote access from Kibana, Logstash or Beats. Replace the IP, 192.168.60.19, with your appropriate server IP address<\/strong>.<\/p>\n\n\n\n You can as well leave the default settings to only allow local access to Elasticsearch.<\/p>\n\n\n\n When configured to listen on a non-loopback interface, Elasticsearch expects to join a cluster<\/a> when started. But since we are setting up a single node Elastic Stack, you need to define in the ES configuration that this is a single node setup.<\/p>\n\n\n\n Next, configure the JVM heap size to no more than half the size of your memory. In this case, our test server has 2G RAM and the heap size is set to 512M for both the maximum and minimum sizes.<\/p>\n\n\n\n Start and enable ES to run on system boot.<\/p>\n\n\n\n Verify that Elasticsearch is running as expected.<\/p>\n\n\n\n The next Elastic Stack component to install is Kibana. Since we already created the Elastic Stack repos, you can simply run the command below to install it.<\/p>\n\n\n\n To begin with, you need to configure Kibana to allow remote access. By default, it only allows local access on port 5601\/tcp. Hence, open the Kibana configuration file for editing, then uncomment and change the following lines;<\/p>\n\n\n\n Such that they look as shown below:<\/p>\n\n\n\n Replace the IP addresses of Kibana and Elasticsearch accordingly. Note that in this demo, all Elastic Stack components are running on the same host.<\/strong><\/p>\n\n\n\n Start and enable Kibana to run on system boot.<\/p>\n\n\n\n Checking the status;<\/p>\n\n\n\n Open the Kibana port on FirewallD, if it is running;<\/p>\n\n\n\n On the Kibana web interface, you can choose to try sample data since there is no data being sent to Elasticsearch yet. 
You can as well choose to explore your own data, but of course after sending data to ES.<\/p>\n\n\n\n This is optional, by the way.<\/p>\n\n\n\n Logstash is the component of the Elastic Stack that does further processing of the event data before sending it to the Elasticsearch data store. For example, you can develop custom regex and grok patterns to extract specific fields from the event data.<\/p>\n\n\n\n It is also possible to send the data directly to Elasticsearch instead of passing it through Logstash.<\/p>\n\n\n\n To install Logstash on Rocky Linux 8, run the command below.<\/p>\n\n\n\n Once the installation of Logstash is done, you can verify that it is ready to process event data by running the basic pipeline command as shown below;<\/p>\n\n\n\n Press ENTER to execute the command and wait for the pipeline to be ready to receive input data (The stdin plugin is now waiting for input:<\/strong>).<\/p>\n\n\n\n Type any string, for example, testing Logstash pipeline<\/strong> and press ENTER.<\/p>\n\n\n\n Logstash processes the input data and adds timestamp and host address information to the message.<\/p>\n\n\n\n You can stop the Logstash pipeline by pressing Ctrl+D.<\/p>\n\n\n\n Logstash is now ready to receive and process data. In this demo, we are going to learn how to configure a Logstash pipeline to collect events from a local system.<\/p>\n\n\n\n A Logstash pipeline is made up of three sections; the input, the filter and the output.<\/p>\n\n\n\n You can configure Beats to send data to Logstash or simply read local files on the system. To collect events from the local system, we are going to use the file input plugin<\/a>.<\/p>\n\n\n\n There are multiple input plugins you can use, check them on Logstash Input Plugins<\/a>.<\/p>\n\n\n\n In this demo, we are using a single configuration file to define the pipeline components; input, filter, output.<\/p>\n\n\n\n We are using Grok filters to extract these lines from the log file;<\/p>\n\n\n\n Note that the grok filter used above will just capture password-based and public key SSH logins. <\/strong>You can use the Grok Debugger on Kibana to test your pattern, Dev-tools > Grok Debugger<\/strong>.<\/p>\n\n\n\n Also, note the filter;<\/p>\n\n\n\n To capture only the logs we need for this demo, that line basically drops any other event log not containing the specified keywords above.<\/p>\n\n\n\n Next, we are going to send our processed data to Elasticsearch running on the localhost.<\/p>\n\n\n\n Define the Elasticsearch output.<\/p>\n\n\n\n Before you can send the data to Elasticsearch, you need to verify the grok filters. Follow our guide below to learn how to debug grok patterns.<\/p>\n\n\n\n How to Debug Logstash Grok Filters<\/a><\/p>\n\n\n\n After the configurations, run the command below to verify the Logstash configuration before you can start it.<\/p>\n\n\n\n Configuration OK<\/strong> confirms that there is no error in the configuration file.<\/p>\n\n\n\n If you need to debug a specific Logstash pipeline configuration file, you can execute the command below. Replace the path to the config file with your file path. 
Ensure Logstash is not running when executing this command.<\/p>\n\n\n\n You can start and enable Logstash to run on system boot.<\/p>\n\n\n\n To check the status;<\/p>\n\n\n\n Perform some authentication attempts on your system and head back to Kibana Interface > Management > Stack Management > Data > Index Management<\/strong>.<\/p>\n\n\n\n If your events have been received and forwarded to Elasticsearch, the defined index should now have been created.<\/p>\n\n\n\n To visualize and explore data in Kibana, you need to create an index pattern to retrieve data from Elasticsearch.<\/p>\n\n\n\n On the Kibana Dashboard, navigate to Management > Stack Management > Kibana > Index Patterns > Create index pattern<\/strong>.<\/p>\n\n\n\n An index pattern can match the name of a single index, or include a wildcard (*) to match multiple indices.<\/p>\n\n\n\n Click Next, select @timestamp<\/strong> as the time filter and click Create index pattern<\/strong>.<\/p>\n\n\n\n After that, click the Discover tab<\/strong> on the left pane to view the data. Expand the time range appropriately.<\/p>\n\n\n\n You can as well select the fields that you want to view based on your grok pattern. In the screenshot below, the time range is the last 15 minutes.<\/p>\n\n\n\n Be sure to customize your Grok patterns to your liking if using Logstash as your data processing engine.<\/p>\n\n\n\n Reference;<\/p>\n\n\n\n Installing Elastic Stack<\/a><\/p>\n\n\n\n Integrate Wazuh Manager with ELK Stack<\/a><\/p>\n\n\n\n Configure ELK Stack Alerting with ElastAlert<\/a><\/p>\n\n\n\nInstalling ELK Stack on Rocky Linux 8<\/h2>\n\n\n\n
Creating Elastic Stack RPM Repo on Rocky Linux 8<\/h3>\n\n\n\n
\ncat > \/etc\/yum.repos.d\/elasticstack.repo << EOL\n[elasticsearch]\nname=Elasticsearch repository for 7.x packages\nbaseurl=https:\/\/artifacts.elastic.co\/packages\/7.x\/yum\ngpgcheck=1\ngpgkey=https:\/\/artifacts.elastic.co\/GPG-KEY-elasticsearch\nenabled=1\nautorefresh=1\ntype=rpm-md\nEOL\n<\/code><\/pre>\n\n\n\n
dnf update<\/code><\/pre>\n\n\n\n
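The packages are signed with the Elastic GPG key defined in the repo above. If dnf does not import the key automatically during installation, you can import it manually beforehand; for example;<\/p>\n\n\n\n
rpm --import https:\/\/artifacts.elastic.co\/GPG-KEY-elasticsearch<\/code><\/pre>\n\n\n\n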
Install Elasticsearch on Rocky Linux 8<\/a><\/h3>\n\n\n\n
dnf install elasticsearch<\/code><\/pre>\n\n\n\n
Configuring Elasticsearch<\/h4>\n\n\n\n
sed -i 's\/#network.host: 192.168.0.1\/network.host: 192.168.60.19\/' \/etc\/elasticsearch\/elasticsearch.yml<\/code><\/pre>\n\n\n\n
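You can quickly confirm that the bind address change took effect; for example;<\/p>\n\n\n\n
grep '^network.host' \/etc\/elasticsearch\/elasticsearch.yml<\/code><\/pre>\n\n\n\n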
Since this is a single node Elastic Stack setup, enter the line, discovery.type: single-node<\/code><\/strong>, under the discovery configuration options. However, you can skip this if your ES is listening on a loopback interface.<\/p>\n\n\n\n
vim \/etc\/elasticsearch\/elasticsearch.yml<\/code><\/pre>\n\n\n\n
# --------------------------------- Discovery ----------------------------------\n#\n# Pass an initial list of hosts to perform discovery when this node is started:\n# The default list of hosts is [\"127.0.0.1\", \"[::1]\"]\n#\n#discovery.seed_hosts: [\"host1\", \"host2\"]\n#\n# Bootstrap the cluster using an initial set of master-eligible nodes:\n#\n#cluster.initial_master_nodes: [\"node-1\", \"node-2\"]\n# Single Node Discovery\ndiscovery.type: single-node<\/strong><\/code><\/pre>\n\n\n\n
vim \/etc\/elasticsearch\/jvm.options<\/code><\/pre>\n\n\n\n
...\n################################################################\n\n# Xms represents the initial size of total heap space\n# Xmx represents the maximum size of total heap space\n\n-Xms512m\n-Xmx512m<\/strong>\n...<\/code><\/pre>\n\n\n\n
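If you are not sure how much memory your server has, you can check it before settling on the heap size; for example;<\/p>\n\n\n\n
free -h<\/code><\/pre>\n\n\n\n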
systemctl daemon-reload<\/code><\/pre>\n\n\n\n
systemctl enable --now elasticsearch<\/code><\/pre>\n\n\n\n
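You can also confirm that Elasticsearch is listening on its ports (9200 for HTTP, 9300 for transport); for example;<\/p>\n\n\n\n
ss -altnp | grep -E '9200|9300'<\/code><\/pre>\n\n\n\n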
curl -XGET 192.168.60.19:9200<\/code><\/pre>\n\n\n\n
\n{\n \"name\" : \"localhost.localdomain\",\n \"cluster_name\" : \"elasticsearch\",\n \"cluster_uuid\" : \"Ga4BA59FTcqPtAPzaUaezw\",\n \"version\" : {\n \"number\" : \"7.13.2\",\n \"build_flavor\" : \"default\",\n \"build_type\" : \"rpm\",\n \"build_hash\" : \"4d960a0733be83dd2543ca018aa4ddc42e956800\",\n \"build_date\" : \"2021-06-10T21:01:55.251515791Z\",\n \"build_snapshot\" : false,\n \"lucene_version\" : \"8.8.2\",\n \"minimum_wire_compatibility_version\" : \"6.8.0\",\n \"minimum_index_compatibility_version\" : \"6.0.0-beta1\"\n },\n \"tagline\" : \"You Know, for Search\"\n}\n<\/code><\/pre>\n\n\n\n
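You can as well check the cluster health; for example;<\/p>\n\n\n\n
curl -XGET \"192.168.60.19:9200\/_cluster\/health?pretty\"<\/code><\/pre>\n\n\n\n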
Install Kibana on Rocky Linux 8<\/a><\/h3>\n\n\n\n
dnf install kibana<\/code><\/pre>\n\n\n\n
Configuring Kibana<\/h4>\n\n\n\n
vim \/etc\/kibana\/kibana.yml<\/code><\/pre>\n\n\n\n
...\n#server.port: 5601<\/strong>\n...\n# To allow connections from remote users, set this parameter to a non-loopback address.\n#server.host: \"localhost\"<\/strong>\n...\n# The URLs of the Elasticsearch instances to use for all your queries.\n#elasticsearch.hosts: [\"http:\/\/localhost:9200\"]<\/strong><\/code><\/pre>\n\n\n\n
...\nserver.port: 5601<\/strong>\n...\n# To allow connections from remote users, set this parameter to a non-loopback address.\nserver.host: \"192.168.60.19\"<\/strong>\n...\n# The URLs of the Elasticsearch instances to use for all your queries.\nelasticsearch.hosts: [\"http:\/\/192.168.60.19:9200\"]<\/strong><\/code><\/pre>\n\n\n\n
systemctl enable --now kibana<\/code><\/pre>\n\n\n\n
\n\u25cf kibana.service - Kibana\n Loaded: loaded (\/etc\/systemd\/system\/kibana.service; disabled; vendor preset: disabled)\n Active: active (running) since Thu 2021-07-01 20:06:15 EAT; 9s ago\n Docs: https:\/\/www.elastic.co\n Main PID: 3594 (node)\n Tasks: 14 (limit: 4938)\n Memory: 114.1M\n CGroup: \/system.slice\/kibana.service\n \u251c\u25003594 \/usr\/share\/kibana\/bin\/..\/node\/bin\/node \/usr\/share\/kibana\/bin\/..\/src\/cli\/dist --logging.dest=\/var\/log\/kibana\/kibana.log --pid.file=\/run\/kibana\/kibana.pid\n \u2514\u25003606 \/usr\/share\/kibana\/node\/bin\/node --preserve-symlinks-main --preserve-symlinks \/usr\/share\/kibana\/src\/cli\/dist --logging.dest=\/var\/log\/kibana\/kibana.log --p>\n\nJul 01 20:06:15 localhost.localdomain systemd[1]: Started Kibana.\n<\/code><\/pre>\n\n\n\n
firewall-cmd --add-port=5601\/tcp --permanent<\/code><\/pre>\n\n\n\n
firewall-cmd --reload<\/code><\/pre>\n\n\n\n
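Before opening the browser, you can optionally confirm that Kibana is up and responding; a quick check that should print an HTTP 200 status code once Kibana has fully started;<\/p>\n\n\n\n
curl -s -o \/dev\/null -w '%{http_code}' http:\/\/192.168.60.19:5601\/api\/status<\/code><\/pre>\n\n\n\n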
Accessing Kibana Interface<\/h3>\n\n\n\n
You can now access Kibana from your browser by using the URL, http:\/\/kibana-server-hostname-OR-IP:5601<\/strong><\/code>.<\/p>\n\n\n\n
Install Logstash on Rocky Linux 8<\/h3>\n\n\n\n
dnf install logstash<\/code><\/pre>\n\n\n\n
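Since all the Elastic Stack components should be of the same version, you can confirm that the installed packages match; for example;<\/p>\n\n\n\n
rpm -q elasticsearch kibana logstash<\/code><\/pre>\n\n\n\n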
Testing Logstash<\/h3>\n\n\n\n
\/usr\/share\/logstash\/bin\/logstash -e 'input { stdin { } } output { stdout {} }'<\/code><\/pre>
...\n[INFO ] 2021-07-01 20:50:03.470 [LogStash::Runner] agent - No persistent UUID file found. Generating new UUID {:uuid=>\"99d0b8b2-3130-413d-bb95-5be99ac6e138\", :path=>\"\/usr\/share\/logstash\/data\/uuid\"}\n[INFO ] 2021-07-01 20:50:05.404 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9600}\n...\n...\n[INFO ] 2021-07-01 20:50:09.164 [[main]-pipeline-manager] javapipeline - Pipeline started {\"pipeline.id\"=>\"main\"}\n[INFO ] 2021-07-01 20:50:09.226 [Agent thread] agent - Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}\nThe stdin plugin is now waiting for input:<\/strong><\/code><\/pre>\n\n\n\n
The stdin plugin is now waiting for input:\ntesting Logstash pipeline << PRESS ENTER AFTER THIS LINE<\/strong>\n{\n \"@version\" => \"1\",\n \"@timestamp\" => 2021-07-01T18:06:31.822Z,\n \"host\" => \"elk.kifarunix-demo.com\",\n \"message\" => \"testing Logstash pipeline\"\n}<\/code><\/pre>\n\n\n\n
Configuring Logstash to Collect and Send Events to Elasticsearch<\/h3>\n\n\n\n
\n
Configure Logstash Input plugin<\/h4>\n\n\n\n
vim \/etc\/logstash\/conf.d\/local-ssh-events.conf<\/code><\/pre>\n\n\n\n
## Collect System Authentication events from \/var\/log\/secure<\/strong>\ninput {\n file {\n path => \"\/var\/log\/secure\"\n type => \"ssh_auth\"\n }\n}<\/code><\/pre>\n\n\n\n
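By default, the file input plugin only picks up new lines appended to the file. If you also want to ingest the events that already exist in \/var\/log\/secure, you can optionally set start_position and a sincedb path; a minimal sketch, with an example sincedb path;<\/p>\n\n\n\n
input {\n file {\n path => \"\/var\/log\/secure\"\n type => \"ssh_auth\"\n # Optionally read the file from the beginning on the first run\n start_position => \"beginning\"\n # Example path for tracking the current read position\n sincedb_path => \"\/var\/lib\/logstash\/sincedb_secure\"\n }\n}<\/code><\/pre>\n\n\n\n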
Configure Logstash Filter plugin<\/h4>\n\n\n\n
Configure the Logstash filter to extract only the relevant events, such as the lines below, from the \/var\/log\/secure<\/strong><\/code> file.<\/p>\n\n\n\n
Jul 1 21:45:34 localhost sshd[13707]: Failed password for invalid user gentoo from 192.168.60.18 port 53092 ssh2\nJul 1 21:45:50 localhost sshd[13768]: Failed password for invalid user KIFARUNIX from 192.168.60.18 port 53094 ssh2\nJul 1 21:47:14 localhost sshd[13949]: Failed password for root from 192.168.60.18 port 53112 ssh2\nJul 1 21:47:17 localhost sshd[13949]: Accepted password for root from 192.168.60.18 port 53112 ssh2<\/code><\/pre>\n\n\n\n
vim \/etc\/logstash\/conf.d\/local-ssh-events.conf<\/code><\/pre>\n\n\n\n
\n## Collect System Authentication events from \/var\/log\/secure\ninput {\n file {\n path => \"\/var\/log\/secure\"\n type => \"ssh_auth\"\n }\n}\nfilter {\n if [type] == \"ssh_auth\" {\n grok {\n match => { \"message\" => \"%{SYSLOGTIMESTAMP:timestamp}\\s+%{IPORHOST:dst_host}\\s+%{WORD:syslog_program}\\[\\d+\\]:\\s+(?<status>.+)\\s+for\\s+%{USER:auth_user}\\s+from\\s+%{SYSLOGHOST:src_host}.*\" }\n add_field => { \"activity\" => \"SSH Logins\" }\n add_tag => \"linux_auth\"\n }\n grok {\n match => { \"message\" => \"%{SYSLOGTIMESTAMP:timestamp}\\s+%{IPORHOST:dst_host}\\s+%{WORD:syslog_program}\\[\\d+\\]:\\s+(?<status>.+)\\s+for\\s+invalid\\s+user\\s%{USER:auth_user_nonexist}\\s+from\\s+%{SYSLOGHOST:src_host}.*\" }\n add_field => { \"activity\" => \"SSH Logins\" }\n add_tag => \"linux_auth\"\n }\n }\n# Drop any message that doesn't contain the keywords below\n if [message] !~ \/(Failed password|Accepted password|Accepted publickey|for invalid)\/ { drop { } }\n}<\/b>\n<\/code><\/pre>\n\n\n\n
if [message] !~ \/(Failed password|Accepted password|Accepted publickey|for invalid)\/ { drop { } }<\/strong><\/code><\/pre>\n\n\n\n
Configure Logstash Output Plugin<\/h4>\n\n\n\n
vim \/etc\/logstash\/conf.d\/local-ssh-events.conf<\/code><\/pre>\n\n\n\n
\n## Collect System Authentication events from \/var\/log\/secure\ninput {\n file {\n path => \"\/var\/log\/secure\"\n type => \"ssh_auth\"\n }\n}\nfilter {\n if [type] == \"ssh_auth\" {\n grok {\n match => { \"message\" => \"%{SYSLOGTIMESTAMP:timestamp}\\s+%{IPORHOST:dst_host}\\s+%{WORD:syslog_program}\\[\\d+\\]:\\s+(?<status>.+)\\s+for\\s+%{USER:auth_user}\\s+from\\s+%{SYSLOGHOST:src_host}.*\" }\n add_field => { \"activity\" => \"SSH Logins\" }\n add_tag => \"linux_auth\"\n }\n grok {\n match => { \"message\" => \"%{SYSLOGTIMESTAMP:timestamp}\\s+%{IPORHOST:dst_host}\\s+%{WORD:syslog_program}\\[\\d+\\]:\\s+(?<status>.+)\\s+for\\s+invalid\\s+user\\s%{USER:auth_user_nonexist}\\s+from\\s+%{SYSLOGHOST:src_host}.*\" }\n add_field => { \"activity\" => \"SSH Logins\" }\n add_tag => \"linux_auth\"\n }\n }\n# Drop any message that doesn't contain the keywords below\n if [message] !~ \/(Failed password|Accepted password|Accepted publickey|for invalid)\/ { drop { } }\n}\n## Send data to Elasticsearch on the localhost\noutput {\n elasticsearch {\n hosts => [\"192.168.60.19:9200\"]\n manage_template => false\n index => \"ssh_auth-%{+YYYY.MM}\"\n }\n}<\/b>\n<\/code><\/pre>\n\n\n\n
Ensure that Logstash can read the file being monitored, \/var\/log\/secure<\/code><\/strong>. By default, the file is owned by root. Therefore, for Logstash to be able to read it, first change the group ownership to adm, add the logstash user to the adm group, and assign group read access.<\/p>\n\n\n\n
chown :adm \/var\/log\/secure<\/code><\/pre>\n\n\n\n
usermod -aG adm logstash<\/code><\/pre>\n\n\n\n
chmod g+r \/var\/log\/secure<\/code><\/pre>\n\n\n\n
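You can then confirm the new group ownership, permissions and group membership; for example;<\/p>\n\n\n\n
ls -l \/var\/log\/secure<\/code><\/pre>\n\n\n\n
id logstash<\/code><\/pre>\n\n\n\n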
Verify Logstash Grok Filters<\/h4>\n\n\n\n
Verify Logstash Configuration<\/h4>\n\n\n\n
sudo -u logstash \/usr\/share\/logstash\/bin\/logstash --path.settings \/etc\/logstash -t<\/code><\/pre>\n\n\n\n
...\n[2021-07-01T21:50:50,644][INFO ][org.reflections.Reflections] Reflections took 36 ms to scan 1 urls, producing 24 keys and 48 values \nConfiguration OK<\/strong>\n[2021-07-01T21:50:52,005][INFO ][logstash.runner ] Using config.test_and_exit mode. Config Validation Result: OK. Exiting Logstash\n...<\/code><\/pre>\n\n\n\n
sudo -u logstash \/usr\/share\/logstash\/bin\/logstash -f \/etc\/logstash\/conf.d\/local-ssh-events.conf<\/strong> --path.settings \/etc\/logstash\/<\/code><\/pre>\n\n\n\n
Running Logstash<\/h3>\n\n\n\n
systemctl enable --now logstash<\/code><\/pre>\n\n\n\n
If for some weird reason Logstash did not generate the systemd service file, with the symptom (Failed to restart logstash.service: Unit logstash.service not found.)<\/code><\/strong>, 
simply generate the service file by executing the command;<\/p>\n\n\n\n\/usr\/share\/logstash\/bin\/system-install \/etc\/logstash\/startup.options systemd<\/code><\/pre>\n\n\n\nYou can then start and enable it to run on boot as shown above.<\/p>\n\n\n\n
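If systemd does not immediately pick up the newly generated unit file, you may also need to reload the systemd manager configuration; for example;<\/p>\n\n\n\n
systemctl daemon-reload<\/code><\/pre>\n\n\n\n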
systemctl status logstash<\/code><\/pre>\n\n\n\n
\n\u25cf logstash.service - logstash\n Loaded: loaded (\/etc\/systemd\/system\/logstash.service; disabled; vendor preset: disabled)\n Active: active (running) since Thu 2021-07-01 21:53:56 EAT; 6s ago\n Main PID: 15139 (java)\n Tasks: 15 (limit: 23673)\n Memory: 311.8M\n CGroup: \/system.slice\/logstash.service\n \u2514\u250015139 \/usr\/share\/logstash\/jdk\/bin\/java -Xms1g -Xmx1g -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -Djava.a>\n\nJul 01 21:53:56 elk.kifarunix-demo.com systemd[1]: Started logstash.\nJul 01 21:53:56 elk.kifarunix-demo.com logstash[15139]: Using bundled JDK: \/usr\/share\/logstash\/jdk\nJul 01 22:03:31 elk.kifarunix-demo.com logstash[16724]: OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be remove>\nJul 01 22:03:48 elk.kifarunix-demo.com logstash[16724]: Sending Logstash logs to \/var\/log\/logstash which is now configured via log4j2.properties\nJul 01 22:03:48 elk.kifarunix-demo.com logstash[16724]: [2021-07-01T22:03:48,814][INFO ][logstash.runner ] Log4j configuration path used is: \/etc\/logstash\/log4j2.>\nJul 01 22:03:48 elk.kifarunix-demo.com logstash[16724]: [2021-07-01T22:03:48,824][INFO ][logstash.runner ] Starting Logstash {\"logstash.version\"=>\"7.13.2\", \"jruby>\nJul 01 22:03:50 elk.kifarunix-demo.com logstash[16724]: [2021-07-01T22:03:50,369][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}\n<\/code><\/pre>\n\n\n\n
Confirm Data Reception on Elasticsearch Index<\/h4>\n\n\n\n
<\/figure>\n\n\n\n
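Alternatively, you can confirm from the command line that the index defined in the Logstash output above has been created and is receiving documents; for example;<\/p>\n\n\n\n
curl -XGET \"192.168.60.19:9200\/_cat\/indices\/ssh_auth-*?v\"<\/code><\/pre>\n\n\n\n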
Visualizing Data on Kibana<\/h3>\n\n\n\n
In this demo, our index pattern is ssh_auth-*<\/code><\/strong>, as defined in the Logstash Elasticsearch output plugin, 
index => \"ssh_auth-%{+YYYY.MM}\"<\/code><\/strong>.<\/p>\n\n\n\n
<\/figure>\n\n\n\n
<\/figure>\n\n\n\n
<\/figure>\n\n\n\n
Other Tutorials<\/h3>\n\n\n\n