Purushotam Adhikari

Setting Up a Centralized Logging Server with ELK Stack

Managing logs across multiple servers and applications can quickly become unwieldy. Enter the ELK Stack—a powerful trio of open-source tools (Elasticsearch, Logstash, and Kibana) that creates a robust centralized logging solution.

In this guide, I'll walk you through setting up your own centralized logging server using the ELK Stack, from installation to configuration.

What is the ELK Stack?

The ELK Stack consists of:

  • Elasticsearch: A distributed search and analytics engine
  • Logstash: A data processing pipeline that ingests, transforms, and forwards data
  • Kibana: A visualization platform for exploring and creating dashboards

Prerequisites

  • A server with at least 4GB RAM (8GB recommended)
  • Ubuntu 22.04 or similar Linux distribution
  • Root or sudo access
  • Basic understanding of Linux commands
  • No separate Java installation is required (Elasticsearch and Logstash 8.x ship with a bundled JDK)
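You can quickly confirm the memory and distribution prerequisites from a shell; these are standard Linux commands, nothing ELK-specific:

```shell
# Check total memory in kB (4 GB is roughly 4,000,000 kB; 8 GB recommended)
grep MemTotal /proc/meminfo

# Confirm the distribution and release
grep PRETTY_NAME /etc/os-release
```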

Step 1: Install Elasticsearch

Let's start by installing Elasticsearch, the backbone of our logging system:

```bash
# Import the Elasticsearch GPG key
wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo gpg --dearmor -o /usr/share/keyrings/elasticsearch-keyring.gpg

# Add the Elasticsearch repository
echo "deb [signed-by=/usr/share/keyrings/elasticsearch-keyring.gpg] https://artifacts.elastic.co/packages/8.x/apt stable main" | sudo tee /etc/apt/sources.list.d/elastic-8.x.list

# Update package lists and install Elasticsearch
sudo apt update && sudo apt install elasticsearch
```

Once installed, we need to configure Elasticsearch:

```bash
sudo nano /etc/elasticsearch/elasticsearch.yml
```

Make the following changes:

```yaml
# Set the node name
node.name: elk-central

# Only listen on localhost
network.host: localhost
http.port: 9200

# Cluster settings
cluster.name: logging-cluster
discovery.type: single-node

# Elasticsearch 8.x enables TLS and authentication by default.
# Disabling them keeps this walkthrough simple; do NOT do this in production.
xpack.security.enabled: false
```

Now start and enable Elasticsearch:

```bash
sudo systemctl daemon-reload
sudo systemctl enable elasticsearch
sudo systemctl start elasticsearch
```

Verify it's running:

```bash
curl -X GET "localhost:9200"
```

If you left security enabled (the 8.x default), use HTTPS with the auto-generated CA certificate and the `elastic` user's password instead: `curl --cacert /etc/elasticsearch/certs/http_ca.crt -u elastic https://localhost:9200`.
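Beyond the banner response, the cluster health API gives a quick sanity check; with `discovery.type: single-node` you should see a status of green (or yellow if any replica shards are unassigned):

```shell
# Query cluster health on the local node; look at the "status" field
curl -s "localhost:9200/_cluster/health?pretty"
```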

Step 2: Install Kibana

Next, let's install Kibana, our visualization platform:

```bash
sudo apt install kibana
```

Configure Kibana:

```bash
sudo nano /etc/kibana/kibana.yml
```

Add the following settings:

```yaml
server.port: 5601
server.host: "localhost"
elasticsearch.hosts: ["http://localhost:9200"]
```

Start and enable Kibana:

```bash
sudo systemctl enable kibana
sudo systemctl start kibana
```
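Kibana can take a minute or two to come up after the service starts. Its status API is a convenient way to confirm it is ready before opening a browser:

```shell
# Poll Kibana's status endpoint; it returns a JSON document once the server is up
curl -s "localhost:5601/api/status" | head -c 300
```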

Step 3: Install Logstash

Now for Logstash, our data processing pipeline:

```bash
sudo apt install logstash
```

Let's create a basic Logstash configuration:

```bash
sudo nano /etc/logstash/conf.d/01-input-beats.conf
```

Add the following to accept Filebeat inputs:

```conf
input {
  beats {
    port => 5044
  }
}
```

Next, create a filter configuration:

```bash
sudo nano /etc/logstash/conf.d/30-filter.conf
```

Add basic filtering:

```conf
filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{[host][name]}" ]
    }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}
```

Note the `%{[host][name]}` reference: modern Beats send `host` as an object, so the legacy `%{host}` syntax would not resolve to the hostname.

Finally, create an output configuration:

```bash
sudo nano /etc/logstash/conf.d/50-output-elasticsearch.conf
```

Add Elasticsearch as the output:

```conf
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
  }
}
```
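Before starting the service, it's worth validating the pipeline files you just wrote; Logstash ships a `--config.test_and_exit` flag for exactly this, and prints `Configuration OK` when the files parse cleanly:

```shell
# Parse and validate all pipeline files under /etc/logstash/conf.d without starting the pipeline
sudo -u logstash /usr/share/logstash/bin/logstash --path.settings /etc/logstash --config.test_and_exit
```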

Start and enable Logstash:

```bash
sudo systemctl enable logstash
sudo systemctl start logstash
```

Step 4: Set Up Filebeat on Client Servers

To send logs from your client servers to your centralized ELK server, you'll need to install Filebeat on each client:

```bash
# On each client server
wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo gpg --dearmor -o /usr/share/keyrings/elasticsearch-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/elasticsearch-keyring.gpg] https://artifacts.elastic.co/packages/8.x/apt stable main" | sudo tee /etc/apt/sources.list.d/elastic-8.x.list
sudo apt update && sudo apt install filebeat
```

Configure Filebeat:

```bash
sudo nano /etc/filebeat/filebeat.yml
```

Update with the following (replacing ELK_SERVER_IP with your centralized server's IP):

```yaml
filebeat.inputs:
- type: filestream        # the older "log" input type is deprecated in Filebeat 8.x
  id: system-logs         # filestream inputs require a unique id
  enabled: true
  paths:
    - /var/log/*.log
    - /var/log/syslog
  fields:
    type: syslog
  fields_under_root: true # put "type" at the event root so the Logstash filter's [type] check matches

output.logstash:
  hosts: ["ELK_SERVER_IP:5044"]
```

Also comment out the default `output.elasticsearch` section in the same file; Filebeat allows only one active output at a time.

Start and enable Filebeat:

```bash
sudo systemctl enable filebeat
sudo systemctl start filebeat
```
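Filebeat has built-in self-tests that catch most configuration and connectivity mistakes before any logs flow:

```shell
# Validate the syntax of /etc/filebeat/filebeat.yml
sudo filebeat test config

# Verify Filebeat can actually reach the Logstash endpoint defined in output.logstash
sudo filebeat test output
```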

Step 5: Set Up Nginx as a Reverse Proxy (Optional but Recommended)

For better security and to expose Kibana to the outside world, set up Nginx:

```bash
sudo apt install nginx
```

Create an Nginx configuration:

```bash
sudo nano /etc/nginx/sites-available/kibana
```

Add the following:

```nginx
server {
    listen 80;
    server_name elk.yourdomain.com;

    location / {
        proxy_pass http://localhost:5601;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
```

Enable the site and restart Nginx:

```bash
sudo ln -s /etc/nginx/sites-available/kibana /etc/nginx/sites-enabled/
sudo nginx -t
sudo systemctl restart nginx
```
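The proxy above forwards traffic but adds no authentication of its own, so a common hardening step is HTTP basic auth in front of Kibana. A minimal sketch (the `kibanaadmin` username and the htpasswd file path are arbitrary choices for this example):

```shell
# Install the htpasswd utility and create a credentials file;
# htpasswd prompts for the new user's password
sudo apt install apache2-utils
sudo htpasswd -c /etc/nginx/htpasswd.users kibanaadmin
```

Then add `auth_basic "Restricted Access";` and `auth_basic_user_file /etc/nginx/htpasswd.users;` inside the `location /` block and reload Nginx.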

Step 6: Explore Your Logs in Kibana

  1. Open your browser and navigate to http://elk.yourdomain.com or http://your_server_ip:5601
  2. In Kibana, go to "Management" > "Stack Management" > "Data Views" (called "Index Patterns" in versions before 8.0)
  3. Create a data view matching your indices (e.g., filebeat-*)
  4. Go to "Discover" to start exploring your logs

Conclusion

You now have a functioning centralized logging server with the ELK Stack! This setup allows you to collect, process, and visualize logs from all your servers in one place.

As you grow more comfortable with the ELK Stack, consider enhancing your setup with:

  • Security features (X-Pack)
  • More advanced Logstash filters
  • Custom Kibana dashboards
  • Adding Beats like Metricbeat for system metrics
  • Implementing log rotation and retention policies

The ability to quickly search and analyze logs across your entire infrastructure will significantly improve your troubleshooting capabilities and provide valuable insights into your systems.

Happy logging!
