{"id":6957,"date":"2020-08-19T18:58:15","date_gmt":"2020-08-19T15:58:15","guid":{"rendered":"https:\/\/kifarunix.com\/?p=6957"},"modified":"2024-03-14T22:26:07","modified_gmt":"2024-03-14T19:26:07","slug":"setup-kibana-elasticsearch-and-fluentd-on-centos-8","status":"publish","type":"post","link":"https:\/\/kifarunix.com\/setup-kibana-elasticsearch-and-fluentd-on-centos-8\/","title":{"rendered":"Setup Kibana Elasticsearch and Fluentd on CentOS 8"},"content":{"rendered":"\n
Hello there. In this tutorial, you will learn how to set up Kibana, Elasticsearch and Fluentd on CentOS 8. Normally, you would set up Elasticsearch with Logstash, Kibana and Beats. In this setup, however, we will see how Fluentd can be used instead of Logstash and Beats to collect and ship logs to Elasticsearch, a search and analytics engine. So, what is Fluentd? Fluentd<\/a> “is an open source data collector for unified logging layer”<\/em>. It can act as a log Below are the key features of Fluentd.<\/p>\n\n\n\n In order to set up Kibana, Elasticsearch and Fluentd, we will install and configure each component separately as follows.<\/p>\n\n\n\n Run the command below to create the Elastic Stack version 7.x repository on CentOS 8.<\/p>\n\n\n\n Run a system package update.<\/p>\n\n\n\n Install Elasticsearch on CentOS 8 from the Elastic repos;<\/p>\n\n\n\n Out of the box, Elasticsearch works well with the default configuration options. In this setup, we will make a few changes as per Important Elasticsearch Configurations<\/a>.<\/p>\n\n\n\n Set the Elasticsearch bind address to a specific system IP if you need to enable remote access, for example from Kibana. Replace the IP, 192.168.56.154, with your appropriate server IP address<\/strong>.<\/p>\n\n\n\n You can as well leave the default settings to only allow local access to Elasticsearch.<\/p>\n\n\n\n When configured to listen on a non-loopback interface, Elasticsearch expects to join a cluster<\/a>. But since we are setting up a single-node Elastic Stack, you need to specify in the ES configuration that this is a single-node setup, by entering the line, Next, configure the JVM heap size to no more than half the size of your memory. In this case, our test server has 2G RAM and the heap size is set to 512M for both the maximum and minimum sizes.<\/p>\n\n\n\n Start and enable ES to run on system boot.<\/p>\n\n\n\n Verify that Elasticsearch is running as expected.<\/p>\n\n\n\n The next Elastic Stack component to install is Kibana. 
Since we already created the Elastic Stack repos, you can simply run the command below to install it.<\/p>\n\n\n\n To begin with, you need to configure Kibana to allow remote access. By default, it allows local access on port 5601\/tcp. Hence, open the Kibana configuration file for editing and uncomment and change the following lines;<\/p>\n\n\n\n Such that it looks as shown below:<\/p>\n\n\n\n Replace the IP addresses of Kibana and Elasticsearch accordingly. Note that in this demo, all Elastic Stack components are running on the same host.<\/strong><\/p>\n\n\n\n Start and enable Kibana to run on system boot.<\/p>\n\n\n\n Open the Kibana port on FirewallD, if it is running;<\/p>\n\n\n\n You can now access Kibana from your browser by using the URL, On the Kibana web interface, you can choose to try the sample data since we do not have any data being sent to Elasticsearch yet. You can as well choose to explore your own data, of course after sending data to ES.<\/p>\n\n\n\n Next, install and configure Fluentd to collect logs into Elasticsearch. On the same server running Elasticsearch, we will install the Fluentd aggregator so that it can receive logs from the endpoint nodes running the Fluentd forwarder.<\/p>\n\n\n\n There are a number of requirements that you need to consider while setting up Fluentd.<\/p>\n\n\n\n You can set the maximum number of open file descriptors to 65536 by editing the limits.conf file and adding the lines below;<\/p>\n\n\n\n Apply the changes by rebooting your system or by just running the command below;<\/p>\n\n\n\n Fluentd installation has been made easier through the use of the To install When installed, td-agent installs a systemd service unit for managing it. You can therefore start and enable it to run on system boot by executing the command below;<\/p>\n\n\n\n To check the status;<\/p>\n\n\n\n In this setup, we will use Elasticsearch as our search and analytics engine and hence, all the data collected by Fluentd will be written to Elasticsearch. 
As such, install the Fluentd Elasticsearch plugin.<\/p>\n\n\n\n Also, if you are going to send the logs to Fluentd over the Internet, you need to install You can see a whole list of Fluentd plugins on the list of plugins by category page<\/a>.<\/p>\n\n\n\n The default configuration file for Fluentd installed via the td-agent RPM is First off, there are quite a number of input plugins<\/a> which the Fluentd aggregator can use to receive data from the Fluentd forwarders.<\/p>\n\n\n\n In this setup, we are receiving logs via the Fluentd Create a configuration backup;<\/p>\n\n\n\n Be sure to open this port on the firewall.<\/p>\n\n\n\n Configure Fluentd to send data to Elasticsearch via the elasticsearch Fluentd output<\/a> plugin.<\/p>\n\n\n\n The match directive wildcard is explained on the File syntax page<\/a>.<\/p>\n\n\n\n That is our modified Fluentd aggregator configuration file. You can adjust it to meet your requirements.<\/p>\n\n\n\n Restart the Fluentd td-agent;<\/p>\n\n\n\n Now that Kibana, Elasticsearch and the Fluentd aggregator are set up and ready to receive collected data from the remote endpoints, proceed to install the Fluentd forwarders to push the logs to the Fluentd aggregator.<\/p>\n\n\n\n In this setup, we are using a remote CentOS 8 system as the endpoint to collect logs from.<\/p>\n\n\n\n Ubuntu 20.04;<\/p>\n\n\n\n Ubuntu 18.04<\/p>\n\n\n\n For installation on other systems, refer to the Fluentd installation<\/a> page.<\/p>\n\n\n\n Similarly, make a copy of the configuration file.<\/p>\n\n\n\n In this setup, just as an example, we will collect the system authentication logs, We will use the tail input plugin<\/a> to read the log files by tailing them. Therefore, our input configuration looks like;<\/p>\n\n\n\n Next, configure how logs are shipped to the Fluentd aggregator. 
In this setup, we utilize the forward output plugin<\/a> to send the data to our log manager server running Elasticsearch, Kibana and the Fluentd aggregator, listening on port 24224 TCP\/UDP.<\/p>\n\n\n\n In general, our Fluentd forwarder configuration looks like;<\/p>\n\n\n\n Save and exit the configuration file.<\/p>\n\n\n\n Next, give Fluentd read access to the authentication log file or any log file being collected. By default, only root can read the logs;<\/p>\n\n\n\n To ensure that Fluentd can read this log file, give the group and world read permissions;<\/p>\n\n\n\n The permissions should now look like;<\/p>\n\n\n\n Next, start and enable the Fluentd forwarder to run on system boot;<\/p>\n\n\n\n Check the status;<\/p>\n\n\n\n If you tail the Fluentd forwarder logs, you should see that it starts to read the log file;<\/p>\n\n\n\n On the server running Elasticsearch, Kibana and the Fluentd aggregator, you can check if any data is being received on port 24224;<\/p>\n\n\n\n Perform failed and successful SSH authentication attempts against the host running the Fluentd forwarder. After that, check if your Elasticsearch index has been created. In this setup, we set our index prefix to fluentd, Once you confirm that the data has been received on Elasticsearch and written to your index, navigate to the Kibana web interface, Click on Management tab (on the left side panel) > Kibana > Index Patterns > Create Index Pattern<\/strong>. Enter the wildcard for your index name.<\/p>\n\n\n\n In the next step, select timestamp<\/strong> as the time filter then click Create Index pattern<\/strong> to create your index pattern.<\/p>\n\n\n\n Once you have created the Fluentd Kibana index, you can now view your event data on Kibana by clicking on the Discover<\/strong> tab on the left pane. Expand your time range accordingly.<\/p>\n\n\n\naggregator<\/code> (sits on the same server as Elasticsearch, for example) and as a log
forwarder<\/code> (collecting logs from the nodes being monitored).<\/p>\n\n\n\n
<\/figure>\n\n\n\n
\n
Install and Configure Kibana, Elasticsearch and Fluentd<\/h2>\n\n\n\n
Creating Elastic Stack Repository on CentOS 8<\/a><\/h3>\n\n\n\n
cat > \/etc\/yum.repos.d\/elasticstack.repo << EOL\n[elasticstack]\nname=Elastic repository for 7.x packages\nbaseurl=https:\/\/artifacts.elastic.co\/packages\/7.x\/yum\ngpgcheck=1\ngpgkey=https:\/\/artifacts.elastic.co\/GPG-KEY-elasticsearch\nenabled=1\nautorefresh=1\ntype=rpm-md\nEOL<\/code><\/pre>\n\n\n\n
dnf update<\/code><\/pre>\n\n\n\n
Install Elasticsearch on CentOS 8<\/a><\/h3>\n\n\n\n
dnf install elasticsearch<\/code><\/pre>\n\n\n\n
Configuring Elasticsearch<\/h4>\n\n\n\n
sed -i 's\/#network.host: 192.168.0.1\/network.host: 192.168.56.154\/' \/etc\/elasticsearch\/elasticsearch.yml<\/code><\/pre>\n\n\n\n
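If you want to confirm what that substitution does before touching the live config, you can rehearse it on a scratch file. The path `/tmp/es-demo.yml` below is a hypothetical stand-in for `/etc/elasticsearch/elasticsearch.yml`:

```shell
# Rehearse the bind-address substitution on a scratch file
# (/tmp/es-demo.yml is a hypothetical stand-in for elasticsearch.yml)
printf '#network.host: 192.168.0.1\n' > /tmp/es-demo.yml
sed -i 's/#network.host: 192.168.0.1/network.host: 192.168.56.154/' /tmp/es-demo.yml
grep '^network.host' /tmp/es-demo.yml    # prints: network.host: 192.168.56.154
```

Note that the sed expression both uncomments the line and changes the address in one pass.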
discovery.type: single-node<\/code><\/strong>, under discovery configuration options. However, you can skip this if your ES is listening on a loopback interface.<\/p>\n\n\n\n
vim \/etc\/elasticsearch\/elasticsearch.yml<\/code><\/pre>\n\n\n\n
# --------------------------------- Discovery ----------------------------------\n#\n# Pass an initial list of hosts to perform discovery when this node is started:\n# The default list of hosts is [\"127.0.0.1\", \"[::1]\"]\n#\n#discovery.seed_hosts: [\"host1\", \"host2\"]\n#\n# Bootstrap the cluster using an initial set of master-eligible nodes:\n#\n#cluster.initial_master_nodes: [\"node-1\", \"node-2\"]\n# Single Node Discovery\ndiscovery.type: single-node<\/strong><\/code><\/pre>\n\n\n\n
vim \/etc\/elasticsearch\/jvm.options<\/code><\/pre>\n\n\n\n
...\n################################################################\n\n# Xms represents the initial size of total heap space\n# Xmx represents the maximum size of total heap space\n\n-Xms512m\n-Xmx512m<\/strong>\n...<\/code><\/pre>\n\n\n\n
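As a quick sanity check of the half-of-RAM rule, the arithmetic for a hypothetical 2048 MiB box looks like this; note that half of RAM is the upper bound, and this guide deliberately goes lower, to 512m:

```shell
# Half-of-RAM upper bound for the JVM heap (2048 MiB is an assumed example)
ram_mb=2048
heap_mb=$((ram_mb / 2))
echo "-Xms${heap_mb}m -Xmx${heap_mb}m"    # prints: -Xms1024m -Xmx1024m
```

Keeping Xms and Xmx identical avoids heap resizing pauses at runtime.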
systemctl daemon-reload\nsystemctl enable --now elasticsearch<\/code><\/pre>\n\n\n\n
curl -XGET 192.168.56.154:9200<\/code><\/pre>\n\n\n\n
{\n \"name\" : \"centos8.kifarunix-demo.com\",\n \"cluster_name\" : \"elasticsearch\",\n \"cluster_uuid\" : \"rVPJG0k9TKK9-I-mVmoV_Q\",\n \"version\" : {\n \"number\" : \"7.9.1\",\n \"build_flavor\" : \"default\",\n \"build_type\" : \"rpm\",\n \"build_hash\" : \"083627f112ba94dffc1232e8b42b73492789ef91\",\n \"build_date\" : \"2020-09-01T21:22:21.964974Z\",\n \"build_snapshot\" : false,\n \"lucene_version\" : \"8.6.2\",\n \"minimum_wire_compatibility_version\" : \"6.8.0\",\n \"minimum_index_compatibility_version\" : \"6.0.0-beta1\"\n },\n \"tagline\" : \"You Know, for Search\"\n}<\/code><\/pre>\n\n\n\n
Install Kibana on CentOS 8<\/a><\/h3>\n\n\n\n
yum install kibana<\/code><\/pre>\n\n\n\n
Configuring Kibana<\/h4>\n\n\n\n
vim \/etc\/kibana\/kibana.yml<\/code><\/pre>\n\n\n\n
...\n#server.port: 5601<\/strong>\n...\n# To allow connections from remote users, set this parameter to a non-loopback address.\n#server.host: \"localhost\"<\/strong>\n...\n# The URLs of the Elasticsearch instances to use for all your queries.\n#elasticsearch.hosts: [\"http:\/\/localhost:9200\"]<\/strong><\/code><\/pre>\n\n\n\n
...\nserver.port: 5601<\/strong>\n...\n# To allow connections from remote users, set this parameter to a non-loopback address.\nserver.host: \"192.168.56.154\"<\/strong>\n...\n# The URLs of the Elasticsearch instances to use for all your queries.\nelasticsearch.hosts: [\"http:\/\/192.168.56.154:9200\"]<\/strong><\/code><\/pre>\n\n\n\n
systemctl enable --now kibana<\/code><\/pre>\n\n\n\n
firewall-cmd --add-port=5601\/tcp --permanent<\/code><\/pre>\n\n\n\n
firewall-cmd --reload<\/code><\/pre>\n\n\n\n
Accessing Kibana Interface<\/h4>\n\n\n\n
http:\/\/kibana-server-hostname-OR-IP:5601<\/code>.<\/p>\n\n\n\n
Install and Configure Fluentd on CentOS 8<\/h3>\n\n\n\n
Prerequisites for Installing Fluentd<\/h4>\n\n\n\n
\n
ulimit -n<\/code><\/pre>\n\n\n\n
1024<\/code><\/pre>\n\n\n\n
vim \/etc\/security\/limits.conf<\/code><\/pre>\n\n\n\n
root soft nofile 65536\nroot hard nofile 65536\n* soft nofile 65536\n* hard nofile 65536<\/code><\/pre>\n\n\n\n
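The difference between the soft and hard values can be inspected from any shell. A minimal sketch: the soft limit can be moved within the hard limit at any time, but raising it beyond the hard limit is exactly what requires the limits.conf entries above plus a re-login:

```shell
# Show the current soft and hard open-file limits for this shell
ulimit -Sn
ulimit -Hn
# Adjusting the soft limit within the hard limit needs no re-login
ulimit -Sn 1024
ulimit -Sn    # prints: 1024
```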
\n
cat >> \/etc\/sysctl.conf << 'EOL'\nnet.core.somaxconn = 1024\nnet.core.netdev_max_backlog = 5000\nnet.core.rmem_max = 16777216\nnet.core.wmem_max = 16777216\nnet.ipv4.tcp_wmem = 4096 12582912 16777216\nnet.ipv4.tcp_rmem = 4096 12582912 16777216\nnet.ipv4.tcp_max_syn_backlog = 8096\nnet.ipv4.tcp_slow_start_after_idle = 0\nnet.ipv4.tcp_tw_reuse = 1\nnet.ipv4.ip_local_port_range = 10240 65535\nEOL<\/code><\/pre>\n\n\n\n
sysctl -p<\/code><\/pre>\n\n\n\n
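After `sysctl -p`, you can read back any of the tuned values straight from `/proc` to confirm they took effect without a reboot; for example, the first parameter from the block above:

```shell
# Read back one of the tuned kernel parameters (no reboot required)
cat /proc/sys/net/core/somaxconn
```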
Install Fluentd Aggregator on CentOS 8<\/a><\/h3>\n\n\n\n
td-agent<\/strong><\/code> (Treasure Agent), an RPM package that provides a stable distribution of the Fluentd-based data collector and is managed and maintained by Treasure Data, Inc<\/a>.<\/p>\n\n\n\n
td-agent<\/strong><\/code> package, run the command below to download and execute a script that creates the td-agent RPM repository and installs td-agent on CentOS 8.<\/p>\n\n\n\n
dnf install curl<\/code><\/pre>\n\n\n\n
curl -L https:\/\/toolbelt.treasuredata.com\/sh\/install-redhat-td-agent4.sh | sh<\/code><\/pre>\n\n\n\n
Running Fluentd td-agent on CentOS 8<\/h4>\n\n\n\n
systemctl enable --now td-agent<\/code><\/pre>\n\n\n\n
systemctl status td-agent<\/code><\/pre>\n\n\n\n
\u25cf td-agent.service - td-agent: Fluentd based data collector for Treasure Data\n Loaded: loaded (\/usr\/lib\/systemd\/system\/td-agent.service; enabled; vendor preset: disabled)\n Active: active (running) since Fri 2020-09-18 22:09:40 EAT; 29s ago\n Docs: https:\/\/docs.treasuredata.com\/articles\/td-agent\n Process: 2543 ExecStart=\/opt\/td-agent\/bin\/fluentd --log $TD_AGENT_LOG_FILE --daemon \/var\/run\/td-agent\/td-agent.pid $TD_AGENT_OPTIONS (code=exited, status=0\/SUCCESS)\n Main PID: 2549 (fluentd)\n Tasks: 9 (limit: 5027)\n Memory: 89.4M\n CGroup: \/system.slice\/td-agent.service\n \u251c\u25002549 \/opt\/td-agent\/bin\/ruby \/opt\/td-agent\/bin\/fluentd --log \/var\/log\/td-agent\/td-agent.log --daemon \/var\/run\/td-agent\/td-agent.pid\n \u2514\u25002552 \/opt\/td-agent\/bin\/ruby -Eascii-8bit:ascii-8bit \/opt\/td-agent\/bin\/fluentd --log \/var\/log\/td-agent\/td-agent.log --daemon \/var\/run\/td-agent\/td-agent.pid --u>\n\nSep 18 22:09:38 centos8.kifarunix-demo.com systemd[1]: Starting td-agent: Fluentd based data collector for Treasure Data...\nSep 18 22:09:40 centos8.kifarunix-demo.com systemd[1]: Started td-agent: Fluentd based data collector for Treasure Data.<\/code><\/pre>\n\n\n\n
Installing Fluentd Elasticsearch Plugin<\/h4>\n\n\n\n
td-agent-gem install fluent-plugin-elasticsearch<\/code><\/pre>\n\n\n\n
secure_forward<\/strong><\/code> Fluentd output plugin that sends data securely.<\/p>\n\n\n\n
td-agent-gem install fluent-plugin-secure-forward<\/code><\/pre>\n\n\n\n
Configuring Fluentd Aggregator on CentOS 8<\/a><\/h3>\n\n\n\n
\/etc\/td-agent\/td-agent.conf<\/strong><\/code>. The configuration file consists of the following directives:<\/p>\n\n\n\n
\n
source<\/code><\/strong> directives determine the input sources<\/li>\n\n\n\n
match<\/code><\/strong> directives determine the output destinations<\/li>\n\n\n\n
filter<\/code><\/strong> directives determine the event processing pipelines<\/li>\n\n\n\n
system<\/code><\/strong> directives set system wide configuration<\/li>\n\n\n\n
label<\/code><\/strong> directives group the output and filter for internal routing<\/li>\n\n\n\n
@include<\/code><\/strong> directives include other files<\/li>\n<\/ol>\n\n\n\n
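Putting those directives together, a minimal annotated td-agent.conf skeleton might look like the sketch below. The tag `app.**` and the static `environment` field are illustrative, not part of this setup:

```
# source: where the events come from
<source>
  @type forward
  port 24224
</source>

# filter: processing applied to events whose tag matches
<filter app.**>
  @type record_transformer
  <record>
    environment demo
  </record>
</filter>

# match: where the matching events end up
<match app.**>
  @type stdout
</match>
```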
Configure Fluentd Aggregator Input Plugins<\/a><\/h4>\n\n\n\n
forward<\/code> input plugin<\/a>.
forward<\/code> input plugin listens to a TCP socket to receive the event stream. It also listens to a UDP socket to receive heartbeat messages. The default port for Fluentd forward plugin is 24224.<\/p>\n\n\n\n
The 
cp \/etc\/td-agent\/td-agent.conf{,.old}<\/code><\/pre>\n\n\n\n
vim \/etc\/td-agent\/td-agent.conf<\/code><\/pre>\n\n\n\n
...\n<source>\n @type forward\n port 24224\n bind 192.168.60.6\n<\/source>\n...<\/code><\/pre>\n\n\n\n
firewall-cmd --add-port=24224\/{tcp,udp} --permanent<\/code><\/pre>\n\n\n\n
firewall-cmd --reload<\/code><\/pre>\n\n\n\n
Configure Fluentd Aggregator Output Plugins<\/a><\/h4>\n\n\n\n
vim \/etc\/td-agent\/td-agent.conf<\/code><\/pre>\n\n\n\n
####\n## Output descriptions:\n##\n<match *.**>\n @type elasticsearch\n host 192.168.60.6\n port 9200\n logstash_format true\n logstash_prefix fluentd\n enable_ilm true\n index_date_pattern \"now\/m{yyyy.mm}\"\n flush_interval 10s\n<\/match>\n\n####\n## Source descriptions:\n##\n<source>\n @type forward\n port 24224\n bind 192.168.60.6\n<\/source><\/code><\/pre>\n\n\n\n
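With logstash_format true<\/code> and logstash_prefix fluentd<\/code>, the elasticsearch output writes events into day-stamped indices named fluentd-YYYY.MM.DD<\/code> (UTC by default). A sketch of previewing the index name the plugin would use on any given day:

```shell
# Preview today's index name under the fluentd prefix (assumes daily UTC rotation)
date -u +fluentd-%Y.%m.%d
```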
systemctl restart td-agent<\/code><\/pre>\n\n\n\n
Install Fluentd Forwarder on Remote Nodes<\/a><\/h2>\n\n\n\n
curl -L https:\/\/toolbelt.treasuredata.com\/sh\/install-redhat-td-agent4.sh | sh<\/code><\/pre>\n\n\n\n
curl -L https:\/\/toolbelt.treasuredata.com\/sh\/install-ubuntu-focal-td-agent4.sh | sh<\/code><\/pre>\n\n\n\n
curl -L https:\/\/toolbelt.treasuredata.com\/sh\/install-ubuntu-bionic-td-agent4.sh | sh<\/code><\/pre>\n\n\n\n
Configure Fluentd Forwarder to Ship Logs to Fluentd Aggregator<\/h3>\n\n\n\n
cp \/etc\/td-agent\/td-agent.conf{,.old}<\/code><\/pre>\n\n\n\n
Configure Fluentd Forwarder Input and Output<\/h4>\n\n\n\n
\/var\/log\/secure<\/strong><\/code>, from a remote CentOS 8 system.<\/p>\n\n\n\n

vim \/etc\/td-agent\/td-agent.conf<\/code><\/pre>\n\n\n\n
<source>\n @type tail\n path \/var\/log\/secure\n pos_file \/var\/log\/td-agent\/secure.pos\n tag ssh.auth\n <parse>\n @type syslog\n <\/parse>\n<\/source><\/code><\/pre>\n\n\n\n
<match pattern>\n @type forward\n send_timeout 60s\n recover_wait 10s\n hard_timeout 60s\n\n <server>\n name log_mgr\n host 192.168.60.6\n port 24224\n weight 60\n <\/server>\n<\/match><\/code><\/pre>\n\n\n\n
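The weight<\/code> parameter only matters once more than one aggregator is defined: events are load-balanced in proportion to the weights. A hypothetical two-aggregator variant is sketched below; the host 192.168.60.7 is an assumed second aggregator, not part of this setup:

```
<match *.**>
  @type forward
  send_timeout 60s
  recover_wait 10s

  # receives 60/(60+40) = 60% of the events
  <server>
    name log_mgr
    host 192.168.60.6
    port 24224
    weight 60
  </server>

  # hypothetical second aggregator, receives the remaining 40%
  <server>
    name log_mgr_backup
    host 192.168.60.7
    port 24224
    weight 40
  </server>
</match>
```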
####\n## Output descriptions:\n##\n<match *.**>\n @type forward\n send_timeout 60s\n recover_wait 10s\n hard_timeout 60s\n\n <server>\n name log_mgr\n host 192.168.60.6\n port 24224\n weight 60\n <\/server>\n<\/match>\n####\n## Source descriptions:\n##\n<source>\n @type tail\n path \/var\/log\/secure\n pos_file \/var\/log\/td-agent\/secure.pos\n tag ssh.auth\n <parse>\n @type syslog\n <\/parse>\n<\/source><\/code><\/pre>\n\n\n\n
ls -alh \/var\/log\/secure<\/code><\/pre>\n\n\n\n
-rw-------. 1 root root 14K Sep 19 00:33 \/var\/log\/secure<\/code><\/pre>\n\n\n\n
chmod og+r \/var\/log\/secure<\/code><\/pre>\n\n\n\n
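To see exactly what chmod og+r<\/code> does to the mode bits, you can rehearse it on a scratch file; /tmp/secure-demo<\/code> below is a hypothetical stand-in for /var/log/secure<\/code>:

```shell
# Rehearse the permission change on a scratch file
touch /tmp/secure-demo
chmod 600 /tmp/secure-demo     # owner read/write only, like the default on /var/log/secure
chmod og+r /tmp/secure-demo    # add group and world read
stat -c '%a' /tmp/secure-demo  # prints: 644
```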
ll \/var\/log\/secure<\/code><\/pre>\n\n\n\n
-rw-r--r--. 1 root root 13708 Sep 19 00:33 \/var\/log\/secure<\/code><\/pre>\n\n\n\n
systemctl enable --now td-agent<\/code><\/pre>\n\n\n\n
systemctl status td-agent<\/code><\/pre>\n\n\n\n
\u25cf td-agent.service - td-agent: Fluentd based data collector for Treasure Data\n Loaded: loaded (\/usr\/lib\/systemd\/system\/td-agent.service; enabled; vendor preset: disabled)\n Active: active (running) since Sat 2020-09-19 01:23:40 EAT; 29s ago\n Docs: https:\/\/docs.treasuredata.com\/articles\/td-agent\n Process: 3163 ExecStart=\/opt\/td-agent\/bin\/fluentd --log $TD_AGENT_LOG_FILE --daemon \/var\/run\/td-agent\/td-agent.pid $TD_AGENT_OPTIONS (code=exited, status=0\/SUCCESS)\n Main PID: 3169 (fluentd)\n Tasks: 8 (limit: 11476)\n Memory: 71.0M\n CGroup: \/system.slice\/td-agent.service\n \u251c\u25003169 \/opt\/td-agent\/bin\/ruby \/opt\/td-agent\/bin\/fluentd --log \/var\/log\/td-agent\/td-agent.log --daemon \/var\/run\/td-agent\/td-agent.pid\n \u2514\u25003172 \/opt\/td-agent\/bin\/ruby -Eascii-8bit:ascii-8bit \/opt\/td-agent\/bin\/fluentd --log \/var\/log\/td-agent\/td-agent.log --daemon \/var\/run\/td-agent\/td-agent.pid --u>\n\nSep 19 01:23:39 localrepo.kifarunix-demo.com systemd[1]: Starting td-agent: Fluentd based data collector for Treasure Data...\nSep 19 01:23:40 localrepo.kifarunix-demo.com systemd[1]: Started td-agent: Fluentd based data collector for Treasure Data.<\/code><\/pre>\n\n\n\n
tail -f \/var\/log\/td-agent\/td-agent.log<\/code><\/pre>\n\n\n\n
<\/source>\n<\/ROOT>\n2020-09-19 01:23:40 +0300 [info]: starting fluentd-1.11.2 pid=3163 ruby=\"2.7.1\"\n2020-09-19 01:23:40 +0300 [info]: spawn command to main: cmdline=[\"\/opt\/td-agent\/bin\/ruby\", \"-Eascii-8bit:ascii-8bit\", \"\/opt\/td-agent\/bin\/fluentd\", \"--log\", \"\/var\/log\/td-agent\/td-agent.log\", \"--daemon\", \"\/var\/run\/td-agent\/td-agent.pid\", \"--under-supervisor\"]\n2020-09-19 01:23:41 +0300 [info]: adding match pattern=\"pattern\" type=\"forward\"\n2020-09-19 01:23:41 +0300 [info]: #0 adding forwarding server 'log_mgr' host=\"192.168.60.6\" port=24224 weight=60 plugin_id=\"object:71c\"\n2020-09-19 01:23:41 +0300 [info]: adding source type=\"tail\"\n2020-09-19 01:23:41 +0300 [info]: #0 starting fluentd worker pid=3172 ppid=3169 worker=0\n2020-09-19 01:23:41 +0300 [info]: #0 following tail of \/var\/log\/secure\n2020-09-19 01:23:41 +0300 [info]: #0 fluentd worker is now running worker=0\n...<\/code><\/pre>\n\n\n\n
tcpdump -i enp0s8 -nn dst port 24224<\/code><\/pre>\n\n\n\n
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode\nlistening on enp0s8, link-type EN10MB (Ethernet), capture size 262144 bytes\n01:28:37.183634 IP 192.168.60.5.39452 > 192.168.60.6.24224: Flags [S], seq 2062965426, win 29200, options [mss 1460,sackOK,TS val 3228636873 ecr 0,nop,wscale 7], length 0\n01:28:37.184740 IP 192.168.60.5.39452 > 192.168.60.6.24224: Flags [.], ack 2675674893, win 229, options [nop,nop,TS val 3228636875 ecr 354613533], length 0\n01:28:37.185145 IP 192.168.60.5.39452 > 192.168.60.6.24224: Flags [F.], seq 0, ack 1, win 229, options [nop,nop,TS val 3228636875 ecr 354613533], length 0\n01:28:38.181546 IP 192.168.60.5.39454 > 192.168.60.6.24224: Flags [S], seq 1970844825, win 29200, options [mss 1460,sackOK,TS val 3228637794 ecr 0,nop,wscale 7], length 0\n01:28:38.182649 IP 192.168.60.5.39454 > 192.168.60.6.24224: Flags [.], ack 2454001874, win 229, options [nop,nop,TS val 3228637796 ecr 354614454], length 0\n...<\/code><\/pre>\n\n\n\n
Check Available Indices on Elasticsearch<\/h4>\n\n\n\n
logstash_prefix fluentd<\/strong><\/code>.<\/p>\n\n\n\n
curl -XGET http:\/\/192.168.60.6:9200\/_cat\/indices?v<\/code><\/pre>\n\n\n\n
health status index uuid pri rep docs.count docs.deleted store.size pri.store.size\ngreen open .apm-custom-link kuDD9tq0RAapIEtF4k79zw 1 0 0 0 208b 208b\ngreen open .kibana-event-log-7.9.1-000001 gJ6tr6p5TCWmu1GhUNaD4A 1 0 9 0 48.4kb 48.4kb\ngreen open .kibana_task_manager_1 T-dC9DFNTsy2uoYAJmvDtg 1 0 6 20 167.1kb 167.1kb\ngreen open .apm-agent-configuration lNCadKowT3eIg_heAruB-w 1 0 0 0 208b 208b\nyellow open fluentd-2020.09.19 nWU0KLe2Rv-T5eMD53kcoA 1 1 30 0 36.2kb 36.2kb\n<\/strong>green open .async-search C1gXukCuQIe5grCFpLwxaQ 1 0 0 0 231b 231b\ngreen open .kibana_1 Mw6PD83xT1KksRqAvO1BKg 1 0 22 5 10.4mb 10.4mb<\/code><\/pre>\n\n\n\n
Create Fluentd Kibana Index<\/h4>\n\n\n\n
http:\/\/server-IP-or-hostname:5601<\/strong><\/code>, and create the index.<\/p>\n\n\n\n
<\/figure>\n\n\n\n
Viewing Fluentd Data on Kibana<\/a><\/h4>\n\n\n\n
<\/figure>\n\n\n\n
Further Reading<\/h3>\n\n\n\n