ELK Log Analysis System

Deployment Environment

192.168.1.147 kibana, logstash, Escluster-node-1
192.168.1.151 filebeat, Escluster-node-2, nginx

Software

  1. elasticsearch-5.1.1.rpm
  2. filebeat-5.1.1-x86_64.rpm
  3. kibana-5.1.1-x86_64.rpm
  4. logstash-5.1.1.rpm
  5. jdk-8u121-linux-x64.tar.gz

filebeat

Deploy Filebeat on node 192.168.1.151 to collect logs.

Install directly from the RPM:
[root@baseos-2_192.168.1.151 ~]# rpm -ivh filebeat-5.1.1-x86_64.rpm
Configure Filebeat:
[root@baseos-2_192.168.1.151 ~]# vim /etc/filebeat/filebeat.yml 
filebeat:
  prospectors:
    -
      paths:
        - /data/logs/nginx_access.log
      input_type: log
      document_type: nginx-access
      tail_files: true
    -
      paths:
        - /data/logs/webserver_access.log
      input_type: log
      document_type: webserver-access
      tail_files: true
output:
  logstash:
    hosts: ["192.168.1.147:5044"]
    ssl.certificate_authorities: ["/etc/filebeat/ca.crt"]
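The configs in this document reference logstash_server.crt/logstash_server.key on the Logstash host and ca.crt on the Filebeat host, but their creation is not shown. A minimal sketch, assuming a self-signed setup where the server certificate doubles as the CA file shipped to Filebeat (the CN value is this deployment's Logstash address):

```shell
# Hypothetical: generate a self-signed cert/key pair for the Beats input.
# -nodes leaves the key unencrypted so Logstash can read it at startup.
openssl req -x509 -nodes -newkey rsa:2048 -days 3650 \
    -subj "/CN=192.168.1.147" \
    -keyout logstash_server.key -out logstash_server.crt
# Copy logstash_server.crt to the Filebeat host as /etc/filebeat/ca.crt
```

With `ssl_verify_mode => "none"` on the Logstash side, only Filebeat verifies the peer: it checks the server certificate against the ca.crt it was given.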

logstash

Deploy Logstash on node 192.168.1.147 to receive and process logs.
Logstash requires a Java environment; the installation of jdk-8u121-linux-x64.tar.gz is omitted here.

Install directly from the RPM:
[root@elkserver_192.168.1.147 ~]# rpm -ivh logstash-5.1.1.rpm
Configure Logstash:
[root@elkserver_192.168.1.147  ~]# ln -s /etc/logstash /usr/share/logstash/config
[root@elkserver_192.168.1.147  ~]# ln -s /usr/share/logstash/bin/* /usr/local/bin/
# Adjust the JVM heap size
[root@elkserver_192.168.1.147 ~]# vim /etc/logstash/jvm.options 
 -Xms256m
 -Xmx256m
# Edit the basic Logstash settings
[root@elkserver_192.168.1.147  ~]# vim /etc/logstash/logstash.yml 
pipeline:
  workers: 4
  batch:
    size: 125
    delay: 5
path.config: /etc/logstash/conf.d
path.logs: /data/logs/logstash
http.port: 9600
http.host: "192.168.1.147"
[root@elkserver_192.168.1.147 ~]# mkdir -p /data/logs/logstash 
[root@elkserver_192.168.1.147 ~]# chown logstash:logstash -R /data/logs/logstash 
[root@elkserver_192.168.1.147 ~]# vim /etc/logstash/conf.d/filebeat_nginx_ES.conf 
input {
    beats {
        port => 5044
        ssl => true
        ssl_certificate => "/etc/logstash/logstash_server.crt"
        ssl_key => "/etc/logstash/logstash_server.key"
        ssl_verify_mode => "none"
    }
}

filter {
    grok {
        patterns_dir => "/etc/logstash/conf.d/patterns/mypattern"
        match => {
            "message" => "%{NGXACCESSLOG}"
        }
    }
    geoip {
        source => "client_ip"
        fields => ["city_name", "country_name", "continent_code", "continent_name"]
        database => "/etc/logstash/ipdata.mmdb"
    }

}

output{
    elasticsearch {
        hosts => ["192.168.1.147:9200"]
        index  => "%{type}-log-%{+YYYY.MM.dd}"
        document_type => "%{type}"
        #user => elastic
        #password => 123456
    }
}
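The `%{+YYYY.MM.dd}` sprintf reference in the `index` option resolves against each event's timestamp, so every document type gets one index per day (e.g. nginx-access-log-2017.02.07). A quick shell illustration of the name Logstash would build on the current date:

```shell
# Illustration only: the daily index name for a document_type of "nginx-access"
idx="nginx-access-log-$(date +%Y.%m.%d)"
echo "$idx"
```

Daily indices keep each index small and make retention simple: old days can be dropped with a single index deletion instead of a slow delete-by-query.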


# Custom grok patterns
[root@elkserver_192.168.1.147 ~]# vim /etc/logstash/conf.d/patterns/mypattern
IPADDR [0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}
REQUESTPRO ([^"]*)
REQUESTPATH (?:/[A-Za-z0-9$.+!*'(){},~:;=@#% \[\]_<>^&?-]*)+

NGXACCESSLOG %{IPADDR:client_ip} - (%{USERNAME:user}|-) \[%{HTTPDATE:request_timestamp}\] "%{WORD:request_method} %{REQUESTPATH:request_path} %{REQUESTPRO:request_protocol}" %{NUMBER:http_status} %{NUMBER:body_bytes_sent} (%{GREEDYDATA:http_referer}|-) "%{DATA:http_user_agent}" "%{USERNAME:http_x_forwarded_for}"
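The sub-patterns can be spot-checked outside Logstash with ordinary shell tools. A sketch against a made-up log line in the layout the pattern above expects:

```shell
# Made-up sample line in the combined-log-plus-XFF format parsed above
line='203.0.113.9 - - [07/Feb/2017:10:00:00 +0800] "GET /index.html HTTP/1.1" 200 612 "-" "Mozilla/5.0" "-"'

# The IPADDR sub-pattern (dots escaped), anchored at line start
echo "$line" | grep -oE '^[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}'

# Whitespace-delimited fields line up with the grok captures:
# $9 is http_status, $10 is body_bytes_sent
echo "$line" | awk '{print $9, $10}'
```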
Start Logstash:
[root@elkserver_192.168.1.147 ~]# logstash -f /etc/logstash/conf.d/filebeat_nginx_ES.conf &

elasticsearch

Deploy an Elasticsearch cluster on nodes 192.168.1.147 and 192.168.1.151, with 192.168.1.147 as the master.

Install Elasticsearch

Same on both nodes:

[root@elkserver_192.168.1.147 ~]# rpm -ivh elasticsearch-5.1.1.rpm 
[root@elkserver_192.168.1.147 ~]# vi /etc/security/limits.d/90-nproc.conf
* soft nproc 1024
# change to
* soft nproc 2048
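The edit above raises the soft limit on user processes, which Elasticsearch needs for its thread pools. The effective value can be verified from a shell (after the elasticsearch user logs in again, it should report at least 2048):

```shell
# Print the current shell's max-user-processes soft limit
ulimit -Su
```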
Configure Elasticsearch

Adjust the JVM heap; the same on both nodes.

[root@elkserver_192.168.1.147 ~]# vim  /etc/elasticsearch/jvm.options  

Configure elasticsearch.yml

# Master node
[root@elkserver_192.168.1.147 ~]#  cat  /etc/elasticsearch/elasticsearch.yml  | egrep -v "^$|^#"
cluster.name: ES-wangshenjin
node.name: node-1
path.data: /data/elasticsearch  
path.logs: /data/logs/elasticsearch
network.host: 192.168.1.147
http.port: 9200
discovery.zen.ping.unicast.hosts: ["192.168.1.151"]

[root@elkserver_192.168.1.147 ~]# mkdir -p /data/elasticsearch   /data/logs/elasticsearch
[root@elkserver_192.168.1.147 ~]# chown elasticsearch:elasticsearch -R /data/elasticsearch  /data/logs/elasticsearch

# Second node
[root@baseos-2_192.168.1.151 ~]# cat  /etc/elasticsearch/elasticsearch.yml  | egrep -v "^$|^#"
cluster.name: ES-wangshenjin
node.name: node-2
path.data: /data/elasticsearch  
path.logs: /data/logs/elasticsearch
network.host: 192.168.1.151
http.port: 9200
discovery.zen.ping.unicast.hosts: ["192.168.1.147"]
[root@baseos-2_192.168.1.151 ~]#  mkdir -p /data/elasticsearch   /data/logs/elasticsearch
[root@baseos-2_192.168.1.151 ~]#  chown elasticsearch:elasticsearch -R /data/elasticsearch  /data/logs/elasticsearch
Start Elasticsearch on both nodes:
[root@elkserver_192.168.1.147 ~]# /etc/init.d/elasticsearch start

kibana

Deploy Kibana on 192.168.1.147.

Install directly from the RPM:
[root@elkserver_192.168.1.147 ~]# rpm -ivh kibana-5.1.1-x86_64.rpm
Configure Kibana:
[root@elkserver_192.168.1.147 ~]# cat /etc/kibana/kibana.yml | egrep -v '^$|^#'
server.port: 5601
server.host: "192.168.1.147"
server.name: "kibana.wangshenjin.com"
elasticsearch.url: "http://192.168.1.147:9200"
kibana.index: ".kibana"
kibana.defaultAppId: "discover"
Start Kibana:
[root@elkserver_192.168.1.147 ~]# /etc/init.d/kibana start

When first logging in to the Kibana web UI, you need to create index patterns: enter nginx-access-log-* and webserver-access-log-*, then click Create.