{"id":4982,"date":"2020-02-09T16:01:32","date_gmt":"2020-02-09T13:01:32","guid":{"rendered":"https:\/\/kifarunix.com\/?p=4982"},"modified":"2024-03-14T19:28:29","modified_gmt":"2024-03-14T16:28:29","slug":"installing-elk-stack-on-centos-8","status":"publish","type":"post","link":"https:\/\/kifarunix.com\/installing-elk-stack-on-centos-8\/","title":{"rendered":"Installing ELK Stack on CentOS 8"},"content":{"rendered":"\n

Welcome to our guide on installing ELK Stack<\/a> on CentOS 8.<\/p>\n\n\n\n

ELK is the acronym for three open source projects: Elasticsearch, Logstash, and Kibana<\/a>. Elasticsearch<\/strong> is a search and analytics engine. Logstash<\/strong> is a server\u2011side data processing pipeline that ingests data, transforms it, and then sends it to a “stash” like Elasticsearch. Kibana<\/strong> lets users visualize the data in Elasticsearch with charts and graphs. Beats, the data shippers, are also part of the stack.<\/p>\n\n\n\n

Install ELK Stack on CentOS 8<\/h2>\n\n\n\n

The order of installation of the Elastic Stack components is of great importance. It usually takes the order Elasticsearch > Kibana > Logstash > Beats<\/strong>. Also note that all components should be of the same version.<\/p>\n\n\n\n

To install Elastic Stack components on CentOS 8 system, you can choose to create the Elastic RPM repo or install each component using their respective RPM binary.<\/p>\n\n\n\n

Creating Elastic Stack RPM Repo on CentOS 8<\/h3>\n\n\n\n

Elastic Stack version 7.x repo can be created by running the command below.<\/p>\n\n\n\n

\ncat > \/etc\/yum.repos.d\/elasticstack.repo << EOL\n[elasticstack]\nname=Elastic repository for 7.x packages\nbaseurl=https:\/\/artifacts.elastic.co\/packages\/7.x\/yum\ngpgcheck=1\ngpgkey=https:\/\/artifacts.elastic.co\/GPG-KEY-elasticsearch\nenabled=1\nautorefresh=1\ntype=rpm-md\nEOL\n<\/code><\/pre>\n\n\n\n

Run system package update.<\/p>\n\n\n\n

dnf update<\/code><\/pre>\n\n\n\n
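You can optionally confirm that the repo is in place and enabled by listing it; the repo id, elasticstack, matches the one defined above.<\/p>\n\n\n\n

```shell
# List the Elastic repo created above to confirm it is enabled
dnf repolist elasticstack
```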

Install Elasticsearch on CentOS 8<\/h3>\n\n\n\n

You can install Elasticsearch on CentOS 8 from the Elastic RPM repo created above or directly using the RPM binary.<\/p>\n\n\n\n

dnf install elasticsearch<\/code><\/pre>\n\n\n\n

Alternatively, you can install Elasticsearch using the RPM binary; version 7.5.2 is the latest stable release as of this writing.<\/p>\n\n\n\n

VERSION=7.5.2\ndnf install https:\/\/artifacts.elastic.co\/downloads\/elasticsearch\/elasticsearch-$VERSION-x86_64.rpm<\/code><\/pre>\n\n\n\n

Configuring Elasticsearch<\/h3>\n\n\n\n

Out of the box, Elasticsearch works well with the default configuration options. In this demo, however, we are going to make a few changes as per Important Elasticsearch Configurations<\/a>.<\/p>\n\n\n\n

Set the Elasticsearch bind address to a specific IP if you need to enable remote access either from Kibana or Logstash or from Beats. Replace the IP, 192.168.56.154, with your appropriate server IP address<\/strong>.<\/p>\n\n\n\n

sed -i 's\/#network.host: 192.168.0.1\/network.host: 192.168.56.154\/' \/etc\/elasticsearch\/elasticsearch.yml<\/code><\/pre>\n\n\n\n
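To confirm that the bind address was updated, you can print the active line from the configuration file (the IP shown is our demo address):<\/p>\n\n\n\n

```shell
# Show the effective network.host setting in elasticsearch.yml
grep '^network.host' /etc/elasticsearch/elasticsearch.yml
```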

You can as well leave the default settings to only allow local access to Elasticsearch.<\/p>\n\n\n\n

When configured to listen on a non-loopback interface, Elasticsearch expects to join a cluster<\/a>. Since we are setting up a single-node Elastic Stack, you need to tell Elasticsearch that this is a single-node setup by adding the line, discovery.type: single-node<\/code><\/strong>, under the discovery configuration options. You can skip this if Elasticsearch is listening on a loopback interface.<\/p>\n\n\n\n

vim \/etc\/elasticsearch\/elasticsearch.yml<\/code><\/pre>\n\n\n\n
# --------------------------------- Discovery ----------------------------------\n#\n# Pass an initial list of hosts to perform discovery when this node is started:\n# The default list of hosts is [\"127.0.0.1\", \"[::1]\"]\n#\n#discovery.seed_hosts: [\"host1\", \"host2\"]\n#\n# Bootstrap the cluster using an initial set of master-eligible nodes:\n#\n#cluster.initial_master_nodes: [\"node-1\", \"node-2\"]\n# Single Node Discovery\ndiscovery.type: single-node<\/strong><\/code><\/pre>\n\n\n\n

Next, configure JVM heap size to no more than half the size of your memory. In this case, our test server has 2G RAM and the heap size is set to 512M for both maximum and minimum sizes.<\/p>\n\n\n\n

vim \/etc\/elasticsearch\/jvm.options<\/code><\/pre>\n\n\n\n
...\n################################################################\n\n# Xms represents the initial size of total heap space\n# Xmx represents the maximum size of total heap space\n\n-Xms512m\n-Xmx512m<\/strong>\n...<\/code><\/pre>\n\n\n\n
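As a rough rule of thumb, the heap can be sized to half the total RAM. The short helper below, which is just an illustrative sketch and not part of the official setup, computes that value from \/proc\/meminfo:<\/p>\n\n\n\n

```shell
# Compute half of the total system RAM (in MB) as a suggested heap size
total_mb=$(awk '/MemTotal/ {printf "%d", $2/1024}' /proc/meminfo)
heap_mb=$(( total_mb / 2 ))
echo "-Xms${heap_mb}m"
echo "-Xmx${heap_mb}m"
```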

Start and enable ES to run on system boot.<\/p>\n\n\n\n

systemctl daemon-reload\nsystemctl enable --now elasticsearch<\/code><\/pre>\n\n\n\n

Verify that Elasticsearch is running as expected.<\/p>\n\n\n\n

curl -XGET 192.168.56.154:9200<\/code><\/pre>\n\n\n\n
\n{\n  \"name\" : \"elastic.kifarunix-demo.com\",\n  \"cluster_name\" : \"elasticsearch\",\n  \"cluster_uuid\" : \"iyslQrEdTISVdVGsDNDvlA\",\n  \"version\" : {\n    \"number\" : \"7.5.2\",\n    \"build_flavor\" : \"default\",\n    \"build_type\" : \"rpm\",\n    \"build_hash\" : \"8bec50e1e0ad29dad5653712cf3bb580cd1afcdf\",\n    \"build_date\" : \"2020-01-15T12:11:52.313576Z\",\n    \"build_snapshot\" : false,\n    \"lucene_version\" : \"8.3.0\",\n    \"minimum_wire_compatibility_version\" : \"6.8.0\",\n    \"minimum_index_compatibility_version\" : \"6.0.0-beta1\"\n  },\n  \"tagline\" : \"You Know, for Search\"\n}\n<\/code><\/pre>\n\n\n\n
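You can also query the cluster health API; on a healthy single-node setup the status should be green or yellow. Replace the IP with your server address:<\/p>\n\n\n\n

```shell
# Check cluster health (replace the IP with your Elasticsearch address)
curl -XGET '192.168.56.154:9200/_cluster/health?pretty'
```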

Install Kibana on CentOS 8<\/h3>\n\n\n\n

The next Elastic Stack component to install is Kibana. Since we already created the Elastic Stack repo, you can simply run the command below to install it.<\/p>\n\n\n\n

yum install kibana<\/code><\/pre>\n\n\n\n

Configuring Kibana<\/h3>\n\n\n\n

To begin with, you need to configure Kibana to allow remote access. By default, it only allows local access on port 5601\/tcp. Hence, open the Kibana configuration file for editing, then uncomment and change the following lines;<\/p>\n\n\n\n

vim \/etc\/kibana\/kibana.yml<\/code><\/pre>\n\n\n\n
...\n#server.port: 5601<\/strong>\n...\n# To allow connections from remote users, set this parameter to a non-loopback address.\n#server.host: \"localhost\"<\/strong>\n...\n# The URLs of the Elasticsearch instances to use for all your queries.\n#elasticsearch.hosts: [\"http:\/\/localhost:9200\"]<\/strong><\/code><\/pre>\n\n\n\n

Such that it looks as shown below:<\/p>\n\n\n\n

Replace the IP addresses of Kibana and Elasticsearch accordingly. Note that in this demo, all Elastic Stack components are running on the same host.<\/strong><\/p>\n\n\n\n

...\nserver.port: 5601<\/strong>\n...\n# To allow connections from remote users, set this parameter to a non-loopback address.\nserver.host: \"192.168.56.154\"<\/strong>\n...\n# The URLs of the Elasticsearch instances to use for all your queries.\nelasticsearch.hosts: [\"http:\/\/192.168.56.154:9200\"]<\/strong><\/code><\/pre>\n\n\n\n
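If you prefer to apply these changes non-interactively, the same edits can be made with sed. The patterns below assume the default commented lines shown above; 192.168.56.154 is our demo IP:<\/p>\n\n\n\n

```shell
# Uncomment and set the Kibana remote-access settings in kibana.yml
sed -i 's/#server.port: 5601/server.port: 5601/' /etc/kibana/kibana.yml
sed -i 's/#server.host: "localhost"/server.host: "192.168.56.154"/' /etc/kibana/kibana.yml
sed -i 's|#elasticsearch.hosts: \["http://localhost:9200"\]|elasticsearch.hosts: ["http://192.168.56.154:9200"]|' /etc/kibana/kibana.yml
```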

Start and enable Kibana to run on system boot.<\/p>\n\n\n\n

systemctl enable --now kibana<\/code><\/pre>\n\n\n\n

Open the Kibana port on FirewallD, if it is running;<\/p>\n\n\n\n

firewall-cmd --add-port=5601\/tcp --permanent<\/code><\/pre>\n\n\n\n
firewall-cmd --reload<\/code><\/pre>\n\n\n\n

Accessing Kibana Interface<\/h3>\n\n\n\n

You can now access Kibana from your browser by using the URL, http:\/\/kibana-server-hostname-OR-IP:5601<\/code>.<\/p>\n\n\n\n

On the Kibana web interface, you can choose to try the sample data since we do not have any data being sent to Elasticsearch yet. You can as well choose to explore your own data, of course after sending data to Elasticsearch.<\/p>\n\n\n\n

Install Logstash on CentOS 8<\/h3>\n\n\n\n

Logstash is the component of Elastic Stack that does further processing of the event data before sending it to the Elasticsearch data store. For example, you can develop custom regex, grok patterns to extract specific fields from the event data.<\/p>\n\n\n\n

It is also possible to send the data directly to Elasticsearch instead of passing it through Logstash.<\/p>\n\n\n\n

To install Logstash on CentOS 8.<\/p>\n\n\n\n

yum install logstash<\/code><\/pre>\n\n\n\n

Logstash usually installs with a bundled Java\/JDK. But if for some reason Logstash requires Java to be installed separately, you can install Java 8 using the command below, if it is supported.<\/p>\n\n\n\n

yum install -y java-1.8.0-openjdk java-1.8.0-openjdk-devel<\/code><\/pre>\n\n\n\n

Testing Logstash<\/h3>\n\n\n\n

Once the installation of Logstash is done, you can verify that it is ready to process event data by running the basic pipeline command as shown below;<\/p>\n\n\n\n

\/usr\/share\/logstash\/bin\/logstash -e 'input { stdin { } } output { stdout {} }'<\/code><\/pre>\n\n\n\n

Press ENTER to execute the command and wait for the pipeline to be ready to receive input data.<\/p>\n\n\n\n

...\n[INFO ] 2020-02-09 08:37:38.732 [[main]-pipeline-manager] javapipeline - Pipeline started {\"pipeline.id\"=>\"main\"}\n[INFO ] 2020-02-09 08:37:38.818 [Agent thread] agent - Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}\nThe stdin plugin is now waiting for input:\n[INFO ] 2020-02-09 08:37:39.395 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9600}<\/code><\/pre>\n\n\n\n

Type any string, for example, testing Logstash pipeline<\/strong> and press ENTER.<\/p>\n\n\n\n

Logstash processes the input data and adds a timestamp and the host address information to the message.<\/p>\n\n\n\n

{\n          \"host\" => \"elastic.kifarunix-demo.com\",\n       \"message\" => \"testing Logstash pipeline\",\n    \"@timestamp\" => 2020-02-09T05:42:17.666Z,\n      \"@version\" => \"1\"\n}<\/code><\/pre>\n\n\n\n

You can stop Logstash pipeline by pressing Ctrl+D.<\/p>\n\n\n\n

Configuring Logstash to Collect and Send Events to Elasticsearch<\/h3>\n\n\n\n

Logstash is now ready to receive and process data. In this demo, we are going to learn how to configure a Logstash pipeline to collect events from the local system.<\/p>\n\n\n\n

A Logstash pipeline is made up of three sections;<\/p>\n\n\n\n
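For instance, a minimal pipeline configuration could look like the sketch below. The Beats input port, 5044, the grok pattern, and the Elasticsearch host are assumptions based on this demo setup, not part of the guide so far:<\/p>\n\n\n\n

```
input {
  beats {
    port => 5044
  }
}
filter {
  # Optionally parse syslog-style messages into structured fields
  grok {
    match => { "message" => "%{SYSLOGLINE}" }
  }
}
output {
  elasticsearch {
    hosts => ["192.168.56.154:9200"]
  }
}
```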