Managing logs across multiple servers and applications can quickly become unwieldy. Enter the ELK Stack—a powerful trio of open-source tools (Elasticsearch, Logstash, and Kibana) that creates a robust centralized logging solution.
In this guide, I'll walk you through setting up your own centralized logging server using the ELK Stack, from installation to configuration.
What is the ELK Stack?
The ELK Stack consists of:
- Elasticsearch: A distributed search and analytics engine
- Logstash: A data processing pipeline that ingests, transforms, and forwards data
- Kibana: A visualization platform for exploring and creating dashboards
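The data flows in one direction: a lightweight shipper (Filebeat, set up in Step 4) tails log files on each client and forwards them to Logstash, which parses and enriches the events before indexing them into Elasticsearch, where Kibana queries them. The ports shown are the defaults used throughout this guide:
Filebeat (clients) --> Logstash :5044 --> Elasticsearch :9200 --> Kibana :5601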
Prerequisites
- A server with at least 4GB RAM (8GB recommended)
- Ubuntu 22.04 or similar Linux distribution
- Root or sudo access
- Basic understanding of Linux commands
- No separate Java installation is needed (the 8.x Elasticsearch and Logstash packages ship with a bundled JDK)
Step 1: Install Elasticsearch
Let's start by installing Elasticsearch, the backbone of our logging system:
# Import the Elasticsearch GPG key
wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo gpg --dearmor -o /usr/share/keyrings/elasticsearch-keyring.gpg
# Add the Elasticsearch repository
echo "deb [signed-by=/usr/share/keyrings/elasticsearch-keyring.gpg] https://artifacts.elastic.co/packages/8.x/apt stable main" | sudo tee /etc/apt/sources.list.d/elastic-8.x.list
# Update package lists and install Elasticsearch
sudo apt update && sudo apt install elasticsearch
Once installed, we need to configure Elasticsearch:
sudo nano /etc/elasticsearch/elasticsearch.yml
Make the following changes:
# Set the node name
node.name: elk-central
# Only listen on localhost (for production, you'd configure security)
network.host: localhost
http.port: 9200
# Cluster settings
cluster.name: logging-cluster
discovery.type: single-node
One caveat: the 8.x package enables authentication and TLS by default, appending its own settings (starting with xpack.security.enabled: true) near the bottom of this file. For the plain-HTTP commands in this guide to work, change xpack.security.enabled, xpack.security.http.ssl.enabled, and xpack.security.transport.ssl.enabled to false there; re-enable them before exposing the server to anything untrusted.
Now start and enable Elasticsearch:
sudo systemctl daemon-reload
sudo systemctl enable elasticsearch
sudo systemctl start elasticsearch
Verify it's running:
curl -X GET "localhost:9200"
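If the node is up, you'll get back a JSON summary of it. The exact version and build details will vary, but it should look roughly like this:
{
  "name" : "elk-central",
  "cluster_name" : "logging-cluster",
  "version" : { ... },
  "tagline" : "You Know, for Search"
}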
Step 2: Install Kibana
Next, let's install Kibana, our visualization platform:
sudo apt install kibana
Configure Kibana:
sudo nano /etc/kibana/kibana.yml
Add the following settings:
server.port: 5601
server.host: "localhost"
elasticsearch.hosts: ["http://localhost:9200"]
Start and enable Kibana:
sudo systemctl enable kibana
sudo systemctl start kibana
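Kibana can take a minute or so to initialize. To confirm it's healthy and listening before moving on (the HTTP response is typically a redirect to the login or home page):
sudo systemctl status kibana --no-pager
curl -I http://localhost:5601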
Step 3: Install Logstash
Now for Logstash, our data processing pipeline:
sudo apt install logstash
Let's create a basic Logstash configuration:
sudo nano /etc/logstash/conf.d/01-input-beats.conf
Add the following to accept Filebeat inputs:
input {
  beats {
    port => 5044
  }
}
Next, create a filter configuration:
sudo nano /etc/logstash/conf.d/30-filter.conf
Add basic filtering:
filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      # Record when the event arrived and which host sent it
      # (Beats sends "host" as an object, so reference its "name" field)
      add_field => {
        "received_at"   => "%{@timestamp}"
        "received_from" => "%{[host][name]}"
      }
    }
    date {
      # Two patterns because syslog pads single-digit days with an extra space
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}
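To make the grok pattern concrete, here is a hypothetical raw syslog line and the fields the filter above would extract from it (the log line is made up, but the mapping follows directly from the pattern):
# Raw line:
#   May  5 10:15:01 web01 sshd[1234]: Accepted publickey for deploy from 10.0.0.7
# Extracted fields:
#   syslog_timestamp => "May  5 10:15:01"
#   syslog_hostname  => "web01"
#   syslog_program   => "sshd"
#   syslog_pid       => "1234"
#   syslog_message   => "Accepted publickey for deploy from 10.0.0.7"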
Finally, create an output configuration:
sudo nano /etc/logstash/conf.d/50-output-elasticsearch.conf
Add Elasticsearch as the output:
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
  }
}
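Before starting the service, it's worth having Logstash validate all three files. The -t flag (short for --config.test_and_exit) parses the pipeline and reports any syntax errors without actually running it:
sudo -u logstash /usr/share/logstash/bin/logstash --path.settings /etc/logstash -t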
Start and enable Logstash:
sudo systemctl enable logstash
sudo systemctl start logstash
Step 4: Set Up Filebeat on Client Servers
To send logs from your client servers to your centralized ELK server, you'll need to install Filebeat on each client:
# On each client server
wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo gpg --dearmor -o /usr/share/keyrings/elasticsearch-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/elasticsearch-keyring.gpg] https://artifacts.elastic.co/packages/8.x/apt stable main" | sudo tee /etc/apt/sources.list.d/elastic-8.x.list
sudo apt update && sudo apt install filebeat
Configure Filebeat:
sudo nano /etc/filebeat/filebeat.yml
Update it with the following, replacing ELK_SERVER_IP with your centralized server's IP. Note that Filebeat allows only one active output, so comment out the default output.elasticsearch section while enabling output.logstash:
filebeat.inputs:
- type: log   # the classic log input; on 8.x you may also use its successor, "filestream"
  enabled: true
  paths:
    - /var/log/*.log
    - /var/log/syslog
  fields:
    type: syslog
  # Without this, the custom field is nested as "fields.type" and the
  # Logstash conditional (if [type] == "syslog") would never match
  fields_under_root: true
output.logstash:
  hosts: ["ELK_SERVER_IP:5044"]
Start and enable Filebeat:
sudo systemctl enable filebeat
sudo systemctl start filebeat
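Filebeat ships with self-checks that make it easy to confirm the client can actually reach your ELK server:
# Validate the configuration file
sudo filebeat test config
# Test connectivity to the Logstash output defined above
sudo filebeat test output
If the output test fails, check that the ELK server's firewall allows inbound TCP on port 5044 (for example, sudo ufw allow 5044/tcp if you use ufw).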
Step 5: Set Up Nginx as a Reverse Proxy (Optional but Recommended)
To make Kibana reachable from outside the server without exposing port 5601 directly, put Nginx in front of it as a reverse proxy:
sudo apt install nginx
Create an Nginx configuration:
sudo nano /etc/nginx/sites-available/kibana
Add the following:
server {
    listen 80;
    server_name elk.yourdomain.com;
    location / {
        proxy_pass http://localhost:5601;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
Enable the site and restart Nginx:
sudo ln -s /etc/nginx/sites-available/kibana /etc/nginx/sites-enabled/
sudo nginx -t
sudo systemctl restart nginx
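Note that this proxy puts Kibana on the open web with no authentication in front of it (and we disabled Elasticsearch's built-in security earlier), so consider at least adding HTTP basic auth. A minimal sketch, assuming the apache2-utils package and a username of your choosing:
# Create a password file (htpasswd comes from apache2-utils)
sudo apt install apache2-utils
sudo htpasswd -c /etc/nginx/htpasswd.kibana kibanaadmin
Then add these two directives inside the location / block and run sudo systemctl reload nginx:
auth_basic "Restricted Access";
auth_basic_user_file /etc/nginx/htpasswd.kibana;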
Step 6: Explore Your Logs in Kibana
- Open your browser and navigate to http://elk.yourdomain.com (or http://your_server_ip:5601 if you skipped the Nginx step)
- In Kibana, go to "Management" > "Stack Management" > "Data Views" (called "Index Patterns" before version 8.0)
- Create a data view matching your log indices (e.g., filebeat-*)
- Go to "Discover" to start exploring your logs
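Discover's search bar accepts KQL. As a quick example using the fields our syslog filter extracts (field names from the grok pattern in Step 3), this query surfaces failed SSH logins across all your clients:
syslog_program : "sshd" and syslog_message : "Failed password"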
Conclusion
You now have a functioning centralized logging server with the ELK Stack! This setup allows you to collect, process, and visualize logs from all your servers in one place.
As you grow more comfortable with the ELK Stack, consider enhancing your setup with:
- Security features (authentication and TLS, which we disabled earlier for simplicity)
- More advanced Logstash filters
- Custom Kibana dashboards
- Adding Beats like Metricbeat for system metrics
- Implementing log rotation and retention policies (a quick sketch follows below)
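On that last point, Elasticsearch's index lifecycle management (ILM) API can expire old indices automatically. A minimal sketch, assuming a hypothetical 30-day retention window (you'd still attach the policy to your index templates):
curl -X PUT "localhost:9200/_ilm/policy/logs-30d" -H 'Content-Type: application/json' -d'
{
  "policy": {
    "phases": {
      "delete": {
        "min_age": "30d",
        "actions": { "delete": {} }
      }
    }
  }
}'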
The ability to quickly search and analyze logs across your entire infrastructure will significantly improve your troubleshooting capabilities and provide valuable insights into your systems.
Happy logging!