Docker log collection: log-pilot + Kafka + ELK


In a multi-node setup, log-pilot runs on every node and collects that node's logs independently. Under concurrent load, Logstash can become a bottleneck and block, so Kafka is inserted as a buffer to ensure no logs are lost:

log-pilot -> kafka -> logstash -> es -> kibana

docker-compose.yml

Shipping to Kafka

```yaml
pilot:
  image: registry.cn-hangzhou.aliyuncs.com/acs/log-pilot:0.9.7-filebeat
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
    - /:/host
  privileged: true
  environment:
    PILOT_TYPE: filebeat
    FILEBEAT_OUTPUT: kafka
    KAFKA_BROKERS: "1.1.1.1:6667,2.2.2.2:6667,3.3.3.3:6667"  # comma-separated broker list
  logging:
    driver: json-file
    options:
      max-size: "10m"
      max-file: "5"
  cpu_shares: 512       # CPU share weight; default is 1024
  mem_limit: 600m       # memory limit: 600 MB
  memswap_limit: 800m   # memory + swap limit: 800 MB
  blkio_config:
    weight: 300         # block I/O weight; default 500, range 10-1000
```
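With `FILEBEAT_OUTPUT: kafka`, Logstash then needs a Kafka input to complete the kafka -> logstash -> es leg of the pipeline. A minimal pipeline sketch, assuming a topic named `app-log` and the Elasticsearch address used elsewhere in this setup; adjust topics, group, and hosts to your environment:

```
# logstash.conf — sketch, not a drop-in config
input {
  kafka {
    bootstrap_servers => "1.1.1.1:6667,2.2.2.2:6667,3.3.3.3:6667"
    topics => ["app-log"]      # assumed topic name
    group_id => "logstash"     # assumed consumer group
    codec => json
  }
}
output {
  elasticsearch {
    hosts => ["http://elasticsearch:9200"]
    index => "app-log-%{+YYYY.MM.dd}"
  }
}
```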

Shipping to Logstash

```yaml
pilot:
  image: registry.cn-hangzhou.aliyuncs.com/acs/log-pilot:0.9.7-filebeat
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
    - /:/host
  privileged: true
  environment:
    PILOT_TYPE: filebeat
    LOGGING_OUTPUT: logstash
    LOGSTASH_HOST: ""          # fill in your Logstash host
    LOGSTASH_PORT: 5044
  logging:
    driver: json-file
    options:
      max-size: "10m"
      max-file: "5"
  cpu_shares: 512       # CPU share weight; default is 1024
  mem_limit: 600m       # memory limit: 600 MB
  memswap_limit: 800m   # memory + swap limit: 800 MB
  blkio_config:
    weight: 300         # block I/O weight; default 500, range 10-1000
```
The corresponding Logstash service:

```yaml
logstash:
  image: hub.c.163.com/muxiyue/logstash:6.4.0
  hostname: logstash
  environment:
    - "xpack.monitoring.elasticsearch.url=http://elasticsearch:9200"
    - "xpack.monitoring.enabled=true"
    - "xpack.monitoring.elasticsearch.username=elastic"
    - "xpack.monitoring.elasticsearch.password=elastic"
    - "LS_JAVA_OPTS=-Xmx2g"
  volumes:
    - /etc/localtime:/etc/localtime:ro
```
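For the direct Logstash path, the pipeline listens on the Beats protocol. A sketch assuming Logstash binds the same port 5044 as `LOGSTASH_PORT` above and writes to the same Elasticsearch with the `elastic`/`elastic` credentials; the index pattern is an assumption:

```
# logstash.conf — sketch, not a drop-in config
input {
  beats {
    port => 5044
  }
}
output {
  elasticsearch {
    hosts => ["http://elasticsearch:9200"]
    user => "elastic"
    password => "elastic"
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
  }
}
```

Note that the `logstash` service as written does not publish port 5044; depending on your network setup you may need a `ports: - 5044:5044` mapping so log-pilot on other nodes can reach it.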
Example application container (Tomcat) declaring its logs via log-pilot labels:

```yaml
tomcat:
  image: tomcat:latest
  ports:
    - 8080:8080
  volumes:
    - /usr/local/tomcat/logs
  labels:
    - "aliyun.logs.catalina=stdout"
    - "aliyun.logs.access=/usr/local/tomcat/logs/localhost_access_log.*.txt"
  logging:
    driver: json-file
    options:
      max-size: "10m"
      max-file: "5"
```
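The two `aliyun.logs.*` labels give log-pilot two streams: `catalina` from the container's stdout and `access` from the log file inside the anonymous volume, shipped as raw lines. To structure the access log in Elasticsearch, a grok filter can be added to the Logstash pipeline; a sketch assuming Tomcat's default access log pattern (Common Log Format):

```
# filter sketch — assumes the default Tomcat access valve pattern
filter {
  grok {
    match => { "message" => "%{COMMONAPACHELOG}" }
  }
}
```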