Detailed Guide: Deploying ELK 7.10 with Containers, Production-Ready
1. ELK Architecture Overview

- First, Logstash can collect, filter, and transform logs. It is feature-complete, but also heavyweight, so it naturally consumes more system resources. Filebeat is a lightweight log shipper; it cannot filter or transform, but as an agent deployed on the application servers purely to collect logs it is arguably the best choice. Because we still want Logstash's filtering, logs are collected with Filebeat and then handed to Logstash for filtering.
- Second, Logstash's throughput is limited. If Filebeat sends too many logs in a short time, they back up and block the pipeline, and log collection suffers. A Kafka message queue is therefore inserted between Filebeat and Logstash as a buffer, decoupling the two sides (Redis would also work). The many Filebeat nodes write their logs straight into Kafka, and Logstash consumes them at its own pace, so neither side disturbs the other; a minimal sketch of this hand-off follows this list.
- As for ZooKeeper, it is the standard tool for coordinating distributed services: it handles Kafka broker registration, topic management, and so on, and gives the outside world visibility into the Kafka cluster nodes. Kafka ships with an embedded ZooKeeper, but this deployment uses a standalone ZooKeeper so the ZooKeeper cluster can be scaled out more easily later.
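To make this hand-off concrete, the sketch below shows the Filebeat side of the pipeline: logs are shipped to a Kafka topic, which Logstash later consumes through its kafka input plugin before filtering and forwarding to Elasticsearch. The log path, broker addresses, and topic name are placeholders for illustration, not values from this deployment.
- # filebeat.yml (sketch): ship application logs to a Kafka topic
- filebeat.inputs:
-   - type: log
-     paths:
-       - /var/log/app/*.log                              # placeholder log path
- output.kafka:
-   hosts: ["kafka1:9092", "kafka2:9092", "kafka3:9092"]  # placeholder broker addresses
-   topic: "app-log"                                      # placeholder topic name
-   required_acks: 1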
2. Environment
- Alibaba Cloud ECS: 5 instances run the ES nodes; 3 instances run Logstash, Kafka, ZooKeeper, and Kibana.
- ECS specs: 5 instances with 4 cores, 16 GB RAM, and SSD disks, plus 3 instances with 4 cores, 16 GB RAM, and SSD disks, all running CentOS 7.8.
- Docker and docker-compose installed on every host.
- ELK 7.10.1; ZooKeeper 3.6.2; Kafka 2.13-2.6.0.
3. System Parameter Tuning
- # Maximum number of processes a user may open
- $ vim /etc/security/limits.d/20-nproc.conf
- * soft nproc 65535
- * hard nproc 65535
- # Kernel tuning for Docker support
- $ modprobe br_netfilter
- $ cat << EOF > /etc/sysctl.d/k8s.conf
- net.bridge.bridge-nf-call-ip6tables = 1
- net.bridge.bridge-nf-call-iptables = 1
- net.ipv4.ip_forward = 1
- EOF
- $ sysctl -p /etc/sysctl.d/k8s.conf
- # Kernel tuning for Elasticsearch
- $ echo 'vm.max_map_count=262144' >> /etc/sysctl.conf
- # Apply the configuration
- $ sysctl -p
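A quick way to confirm the tuning took effect (the nproc limit only applies to new login sessions):
- # Verify the limits and kernel parameters
- $ ulimit -u                                    # expect 65535 in a fresh session
- $ sysctl net.bridge.bridge-nf-call-iptables    # expect 1
- $ sysctl vm.max_map_count                      # expect 262144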
4. Deploying Docker and docker-compose
Deploying Docker
- # Install the required system tools
- $ yum install -y yum-utils device-mapper-persistent-data lvm2
- # Add the Docker repository
- $ yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
- # Refresh the package cache and install Docker CE
- $ yum makecache fast
- $ yum -y install docker-ce
- # Configure Docker
- $ systemctl enable docker
- $ systemctl start docker
- $ vim /etc/docker/daemon.json
- {"data-root": "/var/lib/docker", "bip": "10.50.0.1/16", "default-address-pools": [{"base": "10.51.0.1/16", "size": 24}], "registry-mirrors": ["https://4xr1qpsp.mirror.aliyuncs.com"], "log-opts": {"max-size":"500m", "max-file":"3"}}
- $ sed -i '/ExecStart=/i ExecStartPost=\/sbin\/iptables -P FORWARD ACCEPT' /usr/lib/systemd/system/docker.service
- $ systemctl enable docker.service
- $ systemctl daemon-reload
- $ systemctl restart docker
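After the restart, docker info should reflect daemon.json; a quick sanity check:
- # Confirm the daemon picked up the data root and registry mirror
- $ docker info | grep -i -A 1 -E 'root dir|registry mirrors'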
Deploying docker-compose
- # Install docker-compose
- $ sudo curl -L "https://github.com/docker/compose/releases/download/1.27.4/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
- $ chmod +x /usr/local/bin/docker-compose
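Then verify the binary is executable:
- $ docker-compose --version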
5. Deploying Elasticsearch
On es-master1
- # Create the ES directories
- $ mkdir -p /data/ELKStack
- $ cd /data/ELKStack
- $ mkdir elasticsearch elasticsearch-data elasticsearch-plugins
- # The es user inside the container has uid and gid 1000
- $ chown 1000.1000 elasticsearch-data elasticsearch-plugins
- # Temporarily start an ES container
- $ docker run --name es-test -it --rm docker.elastic.co/elasticsearch/elasticsearch:7.10.1 bash
- # Generate the certificates, valid for 10 years; leave the certificate password empty when prompted
- $ bin/elasticsearch-certutil ca --days 3660
- $ bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12 --days 3660
- # In a new terminal, copy out the generated certificates
- $ cd /data/ELKStack/elasticsearch
- $ mkdir es-p12
- $ docker cp es-test:/usr/share/elasticsearch/elastic-certificates.p12 ./es-p12
- $ docker cp es-test:/usr/share/elasticsearch/elastic-stack-ca.p12 ./es-p12
- $ chown -R 1000.1000 ./es-p12
- # Create docker-compose.yml
- $ vim docker-compose.yml
- version: '2.2'
- services:
-   elasticsearch:
-     image: docker.elastic.co/elasticsearch/elasticsearch:7.10.1
-     container_name: es01
-     environment:
-       - cluster.name=es-docker-cluster
-       - cluster.initial_master_nodes=es01,es02,es03
-       - bootstrap.memory_lock=true
-       - "ES_JAVA_OPTS=-Xms10000m -Xmx10000m"
-     ulimits:
-       memlock:
-         soft: -1
-         hard: -1
-       nofile:
-         soft: 65536
-         hard: 65536
-     mem_limit: 13000m
-     cap_add:
-       - IPC_LOCK
-     restart: always
-     # Use the Docker host network mode
-     network_mode: "host"
-     volumes:
-       - /data/ELKStack/elasticsearch-data:/usr/share/elasticsearch/data
-       - /data/ELKStack/elasticsearch-plugins:/usr/share/elasticsearch/plugins
-       - /data/ELKStack/elasticsearch/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
-       - /data/ELKStack/elasticsearch/es-p12:/usr/share/elasticsearch/config/es-p12
- # Create the elasticsearch.yml configuration file
- $ vim elasticsearch.yml
- cluster.name: "es-docker-cluster"
- node.name: "es01"
- network.host: 0.0.0.0
- node.master: true
- node.data: true
- discovery.zen.minimum_master_nodes: 2
- http.port: 9200
- transport.tcp.port: 9300
- # For a multi-node cluster: the nodes used for discovery and ping-based health checks
- discovery.zen.ping.unicast.hosts: ["172.20.166.25:9300", "172.20.166.24:9300", "172.20.166.22:9300", "172.20.166.23:9300", "172.20.166.26:9300"]
- discovery.zen.fd.ping_timeout: 120s
- discovery.zen.fd.ping_retries: 6
- discovery.zen.fd.ping_interval: 10s
- cluster.info.update.interval: 1m
- indices.fielddata.cache.size: 20%
- indices.breaker.fielddata.limit: 40%
- indices.breaker.request.limit: 40%
- indices.breaker.total.limit: 70%
- indices.memory.index_buffer_size: 20%
- script.painless.regex.enabled: true
- # Disk-based shard allocation watermarks
- cluster.routing.allocation.disk.watermark.low: 100gb
- cluster.routing.allocation.disk.watermark.high: 50gb
- cluster.routing.allocation.disk.watermark.flood_stage: 30gb
- # Local shard recovery settings
- gateway.recover_after_nodes: 3
- gateway.recover_after_time: 5m
- gateway.expected_nodes: 3
- cluster.routing.allocation.node_initial_primaries_recoveries: 8
- cluster.routing.allocation.node_concurrent_recoveries: 2
- # Allow cross-origin requests
- http.cors.enabled: true
- http.cors.allow-origin: "*"
- http.cors.allow-headers: Authorization,X-Requested-With,Content-Length,Content-Type
- # Enable X-Pack
- xpack.security.enabled: true
- xpack.monitoring.collection.enabled: true
- # Enable TLS on transport traffic inside the cluster
- xpack.security.transport.ssl.enabled: true
- xpack.security.transport.ssl.verification_mode: certificate
- xpack.security.transport.ssl.keystore.path: es-p12/elastic-certificates.p12
- xpack.security.transport.ssl.truststore.path: es-p12/elastic-certificates.p12
- # Sync the ES configuration to the other ES nodes with rsync
- $ rsync -avp -e ssh /data/ELKStack 172.20.166.24:/data/
- $ rsync -avp -e ssh /data/ELKStack 172.20.166.22:/data/
- $ rsync -avp -e ssh /data/ELKStack 172.20.166.23:/data/
- $ rsync -avp -e ssh /data/ELKStack 172.20.166.26:/data/
- # Start ES
- $ docker-compose up -d
- # Check ES
- $ docker-compose ps
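If the container exits or the node fails to join the cluster, the ES log is the first place to look (the container name es01 comes from the compose file above):
- # Follow the es01 logs to confirm the node bootstraps cleanly
- $ docker logs -f es01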
On es-master2
- $ cd /data/ELKStack/elasticsearch
- # Change the node name in docker-compose.yml and elasticsearch.yml
- $ sed -i 's/es01/es02/g' docker-compose.yml elasticsearch.yml
- # Start ES
- $ docker-compose up -d
On es-master3
- $ cd /data/ELKStack/elasticsearch
- # Change the node name in docker-compose.yml and elasticsearch.yml
- $ sed -i 's/es01/es03/g' docker-compose.yml elasticsearch.yml
- # Start ES
- $ docker-compose up -d
On es-data1
- $ cd /data/ELKStack/elasticsearch
- # Change the node name in docker-compose.yml and elasticsearch.yml
- $ sed -i 's/es01/es04/g' docker-compose.yml elasticsearch.yml
- # Not a master-eligible node, data only
- $ sed -i 's/node.master: true/node.master: false/g' elasticsearch.yml
- # Start ES
- $ docker-compose up -d
On es-data2
- $ cd /data/ELKStack/elasticsearch
- # Change the node name in docker-compose.yml and elasticsearch.yml
- $ sed -i 's/es01/es05/g' docker-compose.yml elasticsearch.yml
- # Not a master-eligible node, data only
- $ sed -i 's/node.master: true/node.master: false/g' elasticsearch.yml
- # Start ES
- $ docker-compose up -d
Setting the ES passwords
- # On es-master1
- $ docker exec -it es01 bash
- # Set passwords for elastic, apm_system, kibana, kibana_system, logstash_system, beats_system and remote_monitoring_user
- # All passwords below are set to elastic123 as an example; pick your own values in practice
- $ ./bin/elasticsearch-setup-passwords interactive
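With the passwords in place, cluster health and node membership can be checked from any host that can reach port 9200; the password below is the example value set above:
- # Expect 5 nodes and a green (or yellow) status once every host has started its container
- $ curl -u elastic:elastic123 'http://172.20.166.25:9200/_cluster/health?pretty'
- $ curl -u elastic:elastic123 'http://172.20.166.25:9200/_cat/nodes?v'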
6. Deploying Kibana
On logstash3
- $ mkdir -p /data/ELKStack/kibana
- $ cd /data/ELKStack/kibana
- # Create the Kibana directories that will be mounted into the container
- $ mkdir config data plugins
- $ chown 1000.1000 config data plugins
- # Create docker-compose.yml
- $ vim docker-compose.yml
- version: '2'
- services:
-   kibana:
-     image: docker.elastic.co/kibana/kibana:7.10.1
-     container_name: kibana
-     restart: always
-     network_mode: "bridge"
-     mem_limit: 2000m
-     environment:
-       SERVER_NAME: kibana.example.com
-     ports:
-       - "5601:5601"
-     volumes:
-       - /data/ELKStack/kibana/config:/usr/share/kibana/config
-       - /data/ELKStack/kibana/data:/usr/share/kibana/data
-       - /data/ELKStack/kibana/plugins:/usr/share/kibana/plugins
- # Create kibana.yml
- $ vim config/kibana.yml
- server.name: kibana
- server.host: "0"
- elasticsearch.hosts: ["http://172.20.166.25:9200","http://172.20.166.24:9200","http://172.20.166.22:9200"]
- elasticsearch.username: "kibana"
- elasticsearch.password: "elastic123"
- monitoring.ui.container.elasticsearch.enabled: true
- xpack.security.enabled: true
- xpack.encryptedSavedObjects.encryptionKey: encryptedSavedObjects1234567890
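Kibana can then be brought up the same way as the other services; any HTTP response on port 5601 (typically a redirect to the login page) indicates it is running:
- # Start Kibana and confirm it responds
- $ docker-compose up -d
- $ curl -sI http://localhost:5601 | head -1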