Running the ELK Stack on CentOS 7 and using Beats


Collating syslogs in an enterprise environment is incredibly useful. You can get a great overview of all of the activity across your services, easily perform audits and quickly find faults.

I played with Splunk a while ago, and whilst it's amazingly easy to deploy and configure, it becomes paid software once you go past the free tier's daily indexing limit. To go down the free path instead, one of the best alternatives is the ELK stack (Elasticsearch, Logstash, Kibana).

It took me a little while to get a fully functioning system going. Installing the services is easy, but it takes a bit of time to work out how to get data flowing into them. Thankfully, the latest version of ELK ships with some additional products, called Beats, for cross-platform data collection, and the product documentation has been improving quite a lot.

My favourite beat is Topbeat, which gathers performance metrics from your machines and publishes them to ELK. There's a bit of crossover with other monitoring tools (like Nagios), but you get awesome graphs showing system usage over time. It's also fascinating how much Packetbeat can see of the web and DB traffic on your hosts. It's seriously addictive trying to see how much data you can push into ELK from your network, even if you probably won't use all of it.

ELK Build

So here’s how I installed it on CentOS 7:

Update first, as always

# Update the Server
yum update -y

Install Elasticsearch

# Install prerequisites
yum install java-openjdk -y

# Install Elasticsearch
rpm --import http://packages.elastic.co/GPG-KEY-elasticsearch

cat <<EOF > /etc/yum.repos.d/elasticsearch.repo
[elasticsearch-2.x]
name=Elasticsearch repository for 2.x packages
baseurl=http://packages.elastic.co/elasticsearch/2.x/centos
gpgcheck=1
gpgkey=http://packages.elastic.co/GPG-KEY-elasticsearch
enabled=1
EOF

yum install elasticsearch -y

systemctl enable elasticsearch
systemctl start elasticsearch
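
Before moving on, it's worth checking that Elasticsearch actually came up. A quick curl against its HTTP API (it listens on port 9200 by default) should return a small JSON banner with the node name and version:

# Check Elasticsearch is responding
curl http://localhost:9200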

Install Kibana

# Install Kibana
cat <<EOF > /etc/yum.repos.d/kibana.repo
[kibana-4.4]
name=Kibana repository for 4.4.x packages
baseurl=http://packages.elastic.co/kibana/4.4/centos
gpgcheck=1
gpgkey=http://packages.elastic.co/GPG-KEY-elasticsearch
enabled=1
EOF

yum install kibana -y

systemctl enable kibana
systemctl start kibana
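
Kibana listens on port 5601 by default, so a quick check that it's answering locally (before it goes behind Nginx later) might look like:

# Check Kibana is answering on its default port
curl -I http://localhost:5601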

Install Logstash

# Install Logstash
cat <<EOF > /etc/yum.repos.d/logstash.repo
[logstash-2.2]
name=logstash repository for 2.2 packages
baseurl=http://packages.elasticsearch.org/logstash/2.2/centos
gpgcheck=1
gpgkey=http://packages.elasticsearch.org/GPG-KEY-elasticsearch
enabled=1
EOF

yum install logstash -y

systemctl enable logstash
systemctl start logstash

Add TLS Support

# TLS Enable
openssl req -new -newkey rsa:2048 -nodes -out /etc/pki/tls/certs/logstash-forwarder.csr -keyout /etc/pki/tls/private/logstash-forwarder.key -subj '/CN=*.example.com/'
cat /etc/pki/tls/certs/logstash-forwarder.csr

# Take the CSR printed above to your local Certificate Authority for signing, then put the signed certificate into the following file:
vi /etc/pki/tls/certs/logstash-forwarder.crt
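
If you don't have an internal CA handy, a self-signed certificate is enough for a lab setup. This is just a sketch using the same file paths as above; the wildcard CN is an assumption and should match whatever hostname the Beats clients will connect to:

# Alternative: generate a self-signed certificate valid for one year
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout /etc/pki/tls/private/logstash-forwarder.key \
  -out /etc/pki/tls/certs/logstash-forwarder.crt \
  -subj '/CN=*.example.com/'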

Install Nginx

Nginx will be used to present the Kibana site on the standard HTTP and HTTPS ports.

# Install Nginx
yum install epel-release -y && yum install nginx -y

cat <<EOF > /etc/nginx/conf.d/kibana.conf
server {
    listen 80;
    # ssl is declared per-listen here so port 80 stays plain HTTP
    listen 443 ssl;
    ssl_certificate /etc/pki/tls/certs/logstash-forwarder.crt;
    ssl_certificate_key /etc/pki/tls/private/logstash-forwarder.key;

    server_name logstash.example.com;

    location / {
        proxy_pass http://localhost:5601;
    }
}
EOF

systemctl enable nginx
systemctl start nginx
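
On a stock CentOS 7 box, firewalld and SELinux will both get in the way of this reverse proxy. Assuming the defaults are still in place, something like the following opens the web ports and lets Nginx make its upstream connection to Kibana:

# Open HTTP/HTTPS and allow nginx to proxy to localhost:5601
firewall-cmd --permanent --add-service=http
firewall-cmd --permanent --add-service=https
firewall-cmd --reload
setsebool -P httpd_can_network_connect 1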

Configuring Logstash

# Configure Logstash
cat <<EOF > /etc/logstash/conf.d/02-beats-input.conf
input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}
EOF

cat <<EOF > /etc/logstash/conf.d/10-syslog-filter.conf
filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}
EOF

cat <<EOF > /etc/logstash/conf.d/30-elasticsearch-output.conf
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    sniffing => true
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
EOF

# run a config test to make sure it's correct
service logstash configtest
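
Logstash was already running before these pipeline files existed, so it needs a restart to pick them up, and the Beats input port has to be reachable from the client machines. Assuming firewalld with its defaults:

# Restart Logstash to load the new pipeline and open the Beats port
service logstash restart
firewall-cmd --permanent --add-port=5044/tcp
firewall-cmd --reload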

Install Example Dashboards

The Elastic.co team kindly provide some example dashboards to get your system up and running faster.

# Install Dashboards
curl -L -O https://download.elastic.co/beats/dashboards/beats-dashboards-1.1.0.zip
yum install unzip -y
unzip beats-dashboards-*.zip
cd beats-dashboards-*
./load.sh

Install Index Templates

These index templates tell Elasticsearch how to map the fields in the incoming Beats data, so it's important to load them before the logs start arriving.

# Install Index Templates
curl -XPUT 'http://localhost:9200/_template/filebeat?pretty' -d@/etc/filebeat/filebeat.template.json
curl -XPUT 'http://localhost:9200/_template/topbeat?pretty' -d@/etc/topbeat/topbeat.template.json
curl -XPUT 'http://localhost:9200/_template/packetbeat?pretty' -d@/etc/packetbeat/packetbeat.template.json
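
Note that these .template.json files ship with the Beats packages themselves, so if the beats aren't installed on the ELK server you'll need to copy the templates across from one of the clients first. Once loaded, you can read a template straight back out of Elasticsearch to confirm it registered:

# Confirm the filebeat template loaded
curl 'http://localhost:9200/_template/filebeat?pretty'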

In addition, if you’re bringing in Windows logs you can load the winlogbeat.template.json from the Winlogbeat installer in the same way as above; you just need to use SCP or another method to get the template file onto the server first.

# Install Index Templates
curl -XPUT 'http://localhost:9200/_template/winlogbeat?pretty' -d@/tmp/winlogbeat.template.json

Rebranding

If you’re into rebranding the product, you can swap out the logo and favicon for your own just by doing the following (after uploading the files to /tmp):

cp /tmp/kibana.svg /opt/kibana/optimize/bundles/src/ui/public/images/kibana.svg
cp /tmp/favicon.ico /opt/kibana/optimize/bundles/src/ui/public/images/elk.ico

ELK Clients

CentOS 7 Client Machines

I performed the following on all of my CentOS 7 clients to start pushing logs to my Logstash service:

# Elevate
sudo su

# Add the repository
rpm --import http://packages.elastic.co/GPG-KEY-elasticsearch
cat <<EOF > /etc/yum.repos.d/elastic-beats.repo
[beats]
name=Elastic Beats Repository
baseurl=https://packages.elastic.co/beats/yum/el/\$basearch
enabled=1
gpgkey=https://packages.elastic.co/GPG-KEY-elasticsearch
gpgcheck=1
EOF

# Install the packages
yum install filebeat topbeat packetbeat -y

# Add the ca certificate that signed the logstash certificate
cat <<EOF > /etc/pki/tls/certs/logstash-forwarder.crt
-----BEGIN CERTIFICATE-----
<certificate data>
-----END CERTIFICATE-----
EOF

# Configure Filebeat
cat <<EOF > /etc/filebeat/filebeat.yml
filebeat:
  prospectors:
    -
      paths:
        - /var/log/*.log
        - /var/log/*/*.log
        - /var/log/*/*/*.log
      input_type: log
  registry_file: /var/lib/filebeat/registry

output:
  logstash:
    hosts: ["logstash.example.com:5044"]
    index: filebeat
    bulk_max_size: 1024
    tls:
      certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]
shipper:

logging:
  files:
    rotateeverybytes: 10485760 # = 10MB
EOF

# Configure Topbeat
cat <<EOF > /etc/topbeat/topbeat.yml
input:
  period: 10
  procs: [".*"]
  stats:
output:
  logstash:
    hosts: ["logstash.example.com:5044"]
    index: topbeat
    tls:
      # List of root certificates for HTTPS server verifications
      certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]
shipper:
logging:
  files:
    rotateeverybytes: 10485760 # = 10MB
EOF

# Configure Packetbeat
cat <<EOF > /etc/packetbeat/packetbeat.yml
interfaces:
  device: any
protocols:
  dns:
    ports: [53]
    include_authorities: true
    include_additionals: true
  http:
    ports: [80, 8080, 8000, 5000, 8002]
    hide_keywords: ['pass', 'password', 'passwd']
  memcache:
    ports: [11211]
  mysql:
    ports: [3306]
  pgsql:
    ports: [5432]
  redis:
    ports: [6379]
  thrift:
    ports: [9090]
  mongodb:
    ports: [27017]

procs:
  enabled: true
  monitored:
    - process: pgsql
      cmdline_grep: postgres
    - process: nginx
      cmdline_grep: nginx

output:
  logstash:
    hosts: ["logstash.example.com:5044"]
    index: packetbeat
    tls:
      certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]
shipper:
logging:
  files:
    rotateeverybytes: 10485760 # = 10MB
EOF

# Enable and Start the services
systemctl enable topbeat
systemctl enable filebeat
systemctl enable packetbeat
systemctl start topbeat
systemctl start filebeat
systemctl start packetbeat
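
Once the beats are running, data should start appearing in daily indices named after each beat. Back on the ELK server you can watch them show up:

# Run on the ELK server - expect filebeat-*, topbeat-* and packetbeat-* entries
curl 'http://localhost:9200/_cat/indices?v'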

CentOS 6 Client Machines

And the same for CentOS 6, just without systemctl:

# Elevate
sudo su

# Add the repository
rpm --import http://packages.elastic.co/GPG-KEY-elasticsearch
cat <<EOF > /etc/yum.repos.d/elastic-beats.repo
[beats]
name=Elastic Beats Repository
baseurl=https://packages.elastic.co/beats/yum/el/\$basearch
enabled=1
gpgkey=https://packages.elastic.co/GPG-KEY-elasticsearch
gpgcheck=1
EOF

# Install the packages
yum install filebeat topbeat packetbeat -y

# Add the ca certificate that signed the logstash certificate
cat <<EOF > /etc/pki/tls/certs/logstash-forwarder.crt
-----BEGIN CERTIFICATE-----
<certificate data>
-----END CERTIFICATE-----
EOF

# Configure Filebeat
cat <<EOF > /etc/filebeat/filebeat.yml
filebeat:
  prospectors:
    -
      paths:
        - /var/log/*.log
        - /var/log/*/*.log
        - /var/log/*/*/*.log
      input_type: log
  registry_file: /var/lib/filebeat/registry

output:
  logstash:
    hosts: ["logstash.example.com:5044"]
    index: filebeat
    bulk_max_size: 1024
    tls:
      certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]
shipper:

logging:
  files:
    rotateeverybytes: 10485760 # = 10MB
EOF

# Configure Topbeat
cat <<EOF > /etc/topbeat/topbeat.yml
input:
  period: 10
  procs: [".*"]
  stats:
output:
  logstash:
    hosts: ["logstash.example.com:5044"]
    index: topbeat
    tls:
      # List of root certificates for HTTPS server verifications
      certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]
shipper:
logging:
  files:
    rotateeverybytes: 10485760 # = 10MB
EOF

# Configure Packetbeat
cat <<EOF > /etc/packetbeat/packetbeat.yml
interfaces:
  device: any
protocols:
  dns:
    ports: [53]
    include_authorities: true
    include_additionals: true
  http:
    ports: [80, 8080, 8000, 5000, 8002]
    hide_keywords: ['pass', 'password', 'passwd']
  memcache:
    ports: [11211]
  mysql:
    ports: [3306]
  pgsql:
    ports: [5432]
  redis:
    ports: [6379]
  thrift:
    ports: [9090]
  mongodb:
    ports: [27017]

procs:
  enabled: true
  monitored:
    - process: pgsql
      cmdline_grep: postgres
    - process: nginx
      cmdline_grep: nginx

output:
  logstash:
    hosts: ["logstash.example.com:5044"]
    index: packetbeat
    tls:
      certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]
shipper:
logging:
  files:
    rotateeverybytes: 10485760 # = 10MB
EOF

# Enable and Start the services
chkconfig filebeat on
chkconfig topbeat on
chkconfig packetbeat on
service filebeat start
service topbeat start
service packetbeat start

Ubuntu Client Machines

The nice thing is that the configs stay pretty much the same across platforms; here's Ubuntu:

# Elevate
sudo su

# Add the Repository
echo "deb https://packages.elastic.co/beats/apt stable main" | tee -a /etc/apt/sources.list.d/beats.list
wget -qO - https://packages.elastic.co/GPG-KEY-elasticsearch | apt-key add -

# Install the Software
apt-get update -y
apt-get install filebeat topbeat packetbeat -y

# Add the ca certificate that signed the logstash certificate
# (Ubuntu doesn't create /etc/pki by default, so make the directory first)
mkdir -p /etc/pki/tls/certs
cat <<EOF > /etc/pki/tls/certs/logstash-forwarder.crt
-----BEGIN CERTIFICATE-----
<certificate data>
-----END CERTIFICATE-----
EOF

# Configure Filebeat
cat <<EOF > /etc/filebeat/filebeat.yml
filebeat:
  prospectors:
    -
      paths:
        - /var/log/*.log
        - /var/log/*/*.log
        - /var/log/*/*/*.log
      input_type: log
  registry_file: /var/lib/filebeat/registry

output:
  logstash:
    hosts: ["logstash.example.com:5044"]
    index: filebeat
    bulk_max_size: 1024
    tls:
      certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]
shipper:

logging:
  files:
    rotateeverybytes: 10485760 # = 10MB
EOF

# Configure Topbeat
cat <<EOF > /etc/topbeat/topbeat.yml
input:
  period: 10
  procs: [".*"]
  stats:
output:
  logstash:
    hosts: ["logstash.example.com:5044"]
    index: topbeat
    tls:
      # List of root certificates for HTTPS server verifications
      certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]
shipper:
logging:
  files:
    rotateeverybytes: 10485760 # = 10MB
EOF

# Configure Packetbeat
cat <<EOF > /etc/packetbeat/packetbeat.yml
interfaces:
  device: any
protocols:
  dns:
    ports: [53]
    include_authorities: true
    include_additionals: true
  http:
    ports: [80, 8080, 8000, 5000, 8002]
    hide_keywords: ['pass', 'password', 'passwd']
  memcache:
    ports: [11211]
  mysql:
    ports: [3306]
  pgsql:
    ports: [5432]
  redis:
    ports: [6379]
  thrift:
    ports: [9090]
  mongodb:
    ports: [27017]

procs:
  enabled: true
  monitored:
    - process: pgsql
      cmdline_grep: postgres
    - process: nginx
      cmdline_grep: nginx

output:
  logstash:
    hosts: ["logstash.example.com:5044"]
    index: packetbeat
    tls:
      certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]
shipper:
logging:
  files:
    rotateeverybytes: 10485760 # = 10MB
EOF

# Start the Services
service filebeat start
service topbeat start
service packetbeat start

Windows Client Machines

Finally, here's what I did to get my Windows servers logging. It involves installing Winlogbeat on top of any other beats you want (I've got Topbeat here):

# Download Winlogbeat and extract into C:\Program Files\Winlogbeat
################################################
## C:\Program Files\Winlogbeat\winlogbeat.yml ##
################################################
winlogbeat:
  registry_file: C:/ProgramData/winlogbeat/.winlogbeat.yml
  event_logs:
    - name: Application
      ignore_older: 72h
    - name: Security
    - name: System
output:
  logstash:
    hosts: ["logstash.example.com:5044"]
    index: winlogbeat
    tls:
      certificate_authorities: ["C:/Program Files/Winlogbeat/ca.cer"]
shipper:
logging:
  to_files: true
  files:
    rotateeverybytes: 10485760 # = 10MB
  level: info

# Download Topbeat and extract into C:\Program Files\Topbeat
##########################################
## C:\Program Files\Topbeat\topbeat.yml ##
##########################################
input:
  period: 10
  procs: [".*"]
  stats:
output:
  logstash:
    hosts: ["logstash.example.com:5044"]
    index: topbeat
    tls:
      certificate_authorities: ["C:/Program Files/Topbeat/ca.cer"]
shipper:
logging:
  files:
    rotateeverybytes: 10485760 # = 10MB
#######################
## Install the Beats ##
#######################
PS C:\Program Files\Winlogbeat> .\install-service-winlogbeat.ps1
PS C:\Program Files\Topbeat> .\install-service-topbeat.ps1
net start winlogbeat
net start topbeat
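
As with the Linux clients, a quick look at the indices back on the ELK server confirms the Windows events are flowing:

# Run on the ELK server - a winlogbeat-YYYY.MM.DD index should appear shortly
curl 'http://localhost:9200/_cat/indices/winlogbeat-*?v'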

Now you should have a nice stream of data coming in for viewing with Kibana. I won’t go into setting up dashboards and the like with the product, as I find it’s just easier to play with it yourself and work out what you want to be monitoring.