# @Time    : 2020-05-02
# @Language: Markdown
# @Software: VS Code
# @Author  : Di Wang
# @Email   : [email protected]

To make the configuration easy to reproduce (honestly, I was too lazy to study automation tools like Puppet, Chef, and Ansible), I list my commonly used configuration items here. Everything below targets CentOS 7 only, with ES 7.6.

## Cluster Setup

In VMware Workstation 15 I created four CentOS 7 virtual machines (I only have 16 GB of RAM, so no GUI: the default systemd target is set to multi-user.target and everything is done over ssh). All of them use NAT networking.

| Node | IP | Description |
| --- | --- | --- |
| Host machine | 192.168.1.1 | Windows 10 |
| node1 | 192.168.1.111 | Production cluster: ES, Kibana |
| node2 | 192.168.1.112 | Production cluster: ES |
| node3 | 192.168.1.113 | Production cluster: ES |
| node4 | 192.168.1.114 | Monitoring cluster: ES, Kibana |

## ES

Installation and pre-configuration (if the `shasum` command is missing, run `sudo yum install perl-Digest-SHA`):

```shell
mkdir -p ~/elk/downloads
cd ~/elk/downloads

wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.6.2-x86_64.rpm
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.6.2-x86_64.rpm.sha512
shasum -a 512 -c elasticsearch-7.6.2-x86_64.rpm.sha512
sudo rpm --install elasticsearch-7.6.2-x86_64.rpm

sudo systemctl start elasticsearch.service
sudo systemctl status elasticsearch.service
sudo systemctl stop elasticsearch.service
```

The installation automatically creates an `elasticsearch` user and group. The configuration directory `/etc/elasticsearch/` has mode 740, so group members can read it; adding yourself to the group makes it convenient to inspect the configuration files. Modifying them, of course, still requires root:

```shell
sudo usermod -a -G elasticsearch yourusername
```

`/etc/elasticsearch/elasticsearch.yml`:

```yaml
# cluster
cluster.name: elasticsearch-dev
cluster.remote.connect: false
# node
node.name: node1
node.master: true
node.voting_only: false
node.data: true
node.ingest: true
node.ml: false
xpack.ml.enabled: false
# path
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
# memory
bootstrap.memory_lock: true
# network
network.host: 0.0.0.0
http.port: 9200
# discovery
discovery.seed_hosts: ["192.168.1.111", "192.168.1.112:9300", "192.168.1.113"]
cluster.initial_master_nodes: ["node1", "node2", "node3"]
# gateway
gateway.expected_nodes: 3
gateway.expected_master_nodes: 3
# gateway.expected_data_nodes: 3
gateway.recover_after_time: 5m
gateway.recover_after_nodes: 3
# xpack monitoring
xpack.monitoring.enabled: true
xpack.monitoring.collection.enabled: true
```
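Once this file is rolled out to each node (with `node.name` adjusted per node) and Elasticsearch restarted everywhere, the role assignment can be checked with the `_cat/nodes` API. The IP below is taken from the table above; any node of the cluster works:

```shell
# Show each node's name, IP, roles, and which one is the elected master
curl -X GET "192.168.1.111:9200/_cat/nodes?v&h=name,ip,node.role,master"
```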

`jvm.options` (the default heap size is 1 GB):

```
-Xms512m
-Xmx512m
```

Some system-level settings (depending on your environment, not all of them may be necessary):

```shell
# This one is required!
sudo systemctl edit elasticsearch
# Add:
[Service]
LimitMEMLOCK=infinity
# Then run:
sudo systemctl daemon-reload
# -------------------------------------
# The following depend on your setup:
sysctl vm.max_map_count
ulimit -a
vim /etc/security/limits.conf
# Add to that file:
elasticsearch  -  nofile  65535
# These two are also configuration files:
vim /etc/sysconfig/elasticsearch
vim /usr/lib/systemd/system/elasticsearch.service
```

Finally, verify that the settings took effect:

```shell
curl -X GET "192.168.1.113:9200/_nodes?filter_path=**.mlockall&pretty"
curl -X GET "192.168.1.113:9200/_nodes/stats/process?filter_path=**.max_file_descriptors&pretty"
```

## Kibana

Installation:

```shell
cd ~/elk/downloads
wget https://artifacts.elastic.co/downloads/kibana/kibana-7.6.2-x86_64.rpm
wget https://artifacts.elastic.co/downloads/kibana/kibana-7.6.2-x86_64.rpm.sha512
shasum -a 512 -c kibana-7.6.2-x86_64.rpm.sha512
sudo rpm --install kibana-7.6.2-x86_64.rpm

sudo systemctl start kibana.service
sudo systemctl stop kibana.service
```

`/etc/kibana/kibana.yml`:

```yaml
server.port: 5601
server.host: 192.168.1.111
server.name: "Kibana-dev"

elasticsearch.hosts: ["http://192.168.1.111:9200", "http://192.168.1.112:9200", "http://192.168.1.113:9200"]
xpack.monitoring.enabled: true
xpack.monitoring.kibana.collection.enabled: true
```

At this point the cluster can be started and runs normally, and monitoring data is collected within the cluster itself.
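As a quick sanity check (IPs assumed from the table above), the cluster health and the Kibana status endpoint can be queried from any machine on the NAT network:

```shell
# Status should be green once all three production nodes have joined
curl -X GET "192.168.1.111:9200/_cluster/health?pretty"
# Kibana exposes a status API on the host configured in kibana.yml
curl "http://192.168.1.111:5601/api/status"
```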

## Logstash

//TODO

## Beat

### Filebeat

Installation:

```shell
curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.6.2-x86_64.rpm
sudo rpm -vi filebeat-7.6.2-x86_64.rpm
```

This article covers the topic well, so I won't write my own summary: FileBeat-Log 相关配置指南.

`filebeat.yml`:

A pipeline can be set on both the input and the output; if both are set, the one on the input wins. Elastic recommends setting it on the input, because "this option usually results in simpler configuration files".

Pipelines are executed by ES Ingest Nodes, and each pipeline defines a list of processors. Filebeat can also define processors directly, but note that these two kinds of processors are not the same: the ingest-node processors are considerably richer.
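For contrast, here is a small sketch of Beat-side processors, which run inside Filebeat before the event is shipped (the `env`/`site` metadata is made up for illustration):

```yaml
processors:
  # Attach static metadata to every event
  - add_fields:
      target: env
      fields:
        site: lab
  # Drop a field we do not need downstream
  - drop_fields:
      fields: ["ecs.version"]
```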

```yaml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/*.log
    #- c:\programdata\elasticsearch\logs\*
    # exclude_lines: ['^DBG']
    # exclude_files: ['.gz$']
  fields:
    logtype: test2

- type: log
  enabled: true
  paths:
    - /var/another_log/*.log
    #- c:\programdata\elasticsearch\logs\*
    # exclude_lines: ['^DBG']
    # exclude_files: ['.gz$']
  fields:
    logtype: test1
  # Tune the following as needed (most are shown at their default values)
  encoding: plain
  ignore_older: "2h"
  close_inactive: "1m"
  close_renamed: false
  close_eof: false
  close_removed: true
  close_timeout: "0"
  harvester_limit: 0
  scan_frequency: "10s"
  # The pipeline must be created in the cluster via the API first
  pipeline: "my_pipeline"

setup.ilm.enabled: false
#setup.ilm.rollover_alias: "test-%{[agent.version]}"
#setup.ilm.pattern: "%{now/d}-000001"
#setup.ilm.policy_name: "my-policy-%{[agent.version]}"

output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["localhost:9200"]
  indices:
    - index: "filebeat-%{[agent.version]}-test1-%{+yyyy.MM.dd}"
      when.contains:
        fields.logtype: "test1"
    - index: "filebeat-%{[agent.version]}-test2-%{+yyyy.MM.dd}"
      when.contains:
        fields.logtype: "test2"

logging.level: info
logging.to_files: true
logging.files:
  path: /var/log/filebeat
  name: filebeat
  keepfiles: 7
  permissions: 0644

monitoring.enabled: true
```
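The `pipeline: "my_pipeline"` referenced above does not exist until it is registered in the cluster. A minimal sketch of doing that through the ingest API (the pipeline body here is purely illustrative):

```
PUT _ingest/pipeline/my_pipeline
{
  "description": "parse incoming log lines",
  "processors": [
    { "trim": { "field": "message" } }
  ]
}
```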

After writing a pipeline, you can test it with the `_simulate` API; once the result looks right, store the pipeline in the cluster. An example:

```
POST /_ingest/pipeline/_simulate?pretty
{
  "pipeline": {
    "description": "_description",
    "processors": [
      {
        "csv": {
          "field": "message",
          "target_fields": [
            "NTPtime",
            "NTPts",
            "PTPtime",
            "PTPts",
            "databuffer_time",
            "databuffer_ts",
            "255_1",
            "255_2",
            "shot",
            "pre_trigger_delay_1",
            "pre_trigger_delay_2",
            "mr_bkt_1",
            "mr_bkt_2",
            "dr_bkt",
            "D8",
            "D9",
            "rf_phase_her",
            "rf_phase_ler",
            "c_mode",
            "n_bmode",
            "nn_bmode"
          ],
          "trim": true
        }
      },
      {
        "remove": {
          "field": "message"
        }
      }
    ]
  },
  "docs": [
    {
      "_index": "index",
      "_id": "id",
      "_source": {
        "message": "2020/04/04 00:01:53.369591,3668770913.369590,2020/04/04 00:02:30.363358,3668770950.363357,2020/04/04 00:01:53.370848,3668770913.370847,255,255,25026,0,0,245,230,0,0,50544,1038,36492,42,31,180,60032,1053"
      }
    },
    {
      "_index": "index",
      "_id": "id",
      "_source": {
        "message": "2020/04/04 00:01:53.429934,3668770913.429934,2020/04/04 00:02:30.423227,3668770950.423226,2020/04/04 00:01:53.430860,3668770913.430860,255,255,25029,0,0,686,117,0,0,50544,1038,36492,182,41,30,60032,1053"
      }
    }
  ]
}
```

(Note: each element of `processors` must contain exactly one processor type, so `csv` and `remove` go in separate objects.)
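Conceptually, the `csv` processor splits `message` on commas and maps the values onto `target_fields` in order (here the sample lines carry two trailing values beyond the 21 named fields). A rough Python sketch of that mapping, using the first sample document:

```python
import csv

# Field names copied from the pipeline's target_fields above
TARGET_FIELDS = [
    "NTPtime", "NTPts", "PTPtime", "PTPts",
    "databuffer_time", "databuffer_ts", "255_1", "255_2",
    "shot", "pre_trigger_delay_1", "pre_trigger_delay_2",
    "mr_bkt_1", "mr_bkt_2", "dr_bkt", "D8", "D9",
    "rf_phase_her", "rf_phase_ler", "c_mode", "n_bmode", "nn_bmode",
]

def parse_message(message: str) -> dict:
    """Split a CSV log line and map its values onto the named fields."""
    values = next(csv.reader([message]))
    # zip stops at the shorter sequence, so values without a name are dropped;
    # strip() stands in for the processor's "trim": true
    return {name: value.strip() for name, value in zip(TARGET_FIELDS, values)}

doc = parse_message(
    "2020/04/04 00:01:53.369591,3668770913.369590,"
    "2020/04/04 00:02:30.363358,3668770950.363357,"
    "2020/04/04 00:01:53.370848,3668770913.370847,"
    "255,255,25026,0,0,245,230,0,0,50544,1038,36492,42,31,180,60032,1053"
)
print(doc["shot"])  # -> 25026
```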

### Metricbeat

The default ES and Kibana monitoring turned out to be roughly good enough for me (Metricbeat does provide richer data), my machine's memory is limited, and running two clusters burns too much power (working from home, I worry about the electricity bill). So I have not actually tested the content below.

Elastic recommends monitoring a production cluster with a separate cluster and a separate Kibana instance. This really is necessary: if the production cluster runs into trouble, monitoring hosted on that same cluster may break at exactly the moment you need it, making it useless.

If you don't want a separate cluster, simply setting `xpack.monitoring.collection.enabled` is enough.

The configuration below covers the separated setup with two clusters: a production cluster and a monitoring cluster.

First, disable the default ES monitoring collection on the production cluster:

```
PUT _cluster/settings
{
  "persistent": {
    "xpack.monitoring.elasticsearch.collection.enabled": false
  }
}
```
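To double-check what was persisted, the cluster settings can be read back from any production node (IP assumed from the table above):

```shell
# The persistent section should show the collection flag set to false
curl -X GET "192.168.1.111:9200/_cluster/settings?pretty"
```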

Installation and configuration:

```shell
curl -L -O https://artifacts.elastic.co/downloads/beats/metricbeat/metricbeat-7.6.2-x86_64.rpm
sudo rpm -vi metricbeat-7.6.2-x86_64.rpm
metricbeat modules enable elasticsearch-xpack
```

`/etc/metricbeat/metricbeat.yml`:

```yaml
metricbeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 1
  index.codec: best_compression
setup.dashboards.enabled: true
setup.kibana:
  # Import the dashboards into the monitoring cluster's Kibana instance
  host: "192.168.1.114:5601"
output.elasticsearch:
  # Point the output at the monitoring cluster
  hosts: ["192.168.1.114:9200"]
```

`modules.d/elasticsearch-xpack.yml`:

```yaml
- module: elasticsearch
  metricsets:
    - ccr
    - cluster_stats
    - index
    - index_recovery
    - index_summary
    - ml_job
    - node_stats
    - shard
    - enrich
  period: 10s
  hosts: ["http://192.168.1.111:9200"]
  #username: "user"
  #password: "secret"
  xpack.enabled: true
```

Test:

```shell
curl -XGET 'http://192.168.1.114:9200/metricbeat-*/_search?pretty'
```