This article walks through deploying an ELKB 5.2.2 cluster environment. I have worked with ELK 1.4, 2.0, 2.4, 5.0, and 5.2 over the years, and honestly the earlier versions never left much of an impression; only with 5.2 did things start to click, which shows how slowly real understanding builds. I will try to make this document reasonably detailed; lately I tend to write ever more tersely, which may not be a good habit. On to the main content. (Note: the configuration shown here is the pre-optimization configuration; it is fine for normal use, but needs tuning at high volume.)

Note: this is a major version upgrade with many changes. The significant deployment changes are:

1. filebeat now outputs directly to Kafka, and drops unnecessary fields such as the beat-related ones.
2. The elasticsearch cluster layout is optimized: 3 master nodes and 6 data nodes.
3. The logstash filter adds urldecode, so Chinese characters display correctly in the url, referer, and agent fields.
4. The logstash filter adds geoip, to resolve client IPs to regions and cities.
5. logstash mutate replaces strings and removes unnecessary fields, such as the Kafka-related ones.
6. The elasticsearch head plugin now requires a separate node.js deployment; it can no longer be bundled as before.
7. The nginx log gains the request parameters and the request method.

1. Architecture

Candidate architectures:

filebeat -- elasticsearch -- kibana
filebeat -- logstash -- kafka -- logstash -- elasticsearch -- kibana
filebeat -- kafka -- logstash -- elasticsearch -- kibana

Since filebeat 5.2.2 supports multiple outputs (logstash, elasticsearch, kafka, redis, syslog, file, and so on), to make good use of resources while supporting high-concurrency scenarios I chose:

filebeat (18) -- kafka (3) -- logstash (3) -- elasticsearch (3) -- kibana (3, behind nginx load balancing)

This runs on 3 physical machines and 12 virtual machines, all on CentOS 6.8.

2. Environment preparation

Set up the hosts file; Kafka relies on it:

cat /etc/hosts

3. Deploy the elasticsearch cluster

mkdir /data/esnginx
mkdir /data/eslog
rpm -ivh /srv/elasticsearch-5.2.2.rpm
chkconfig --add elasticsearch
chkconfig postfix off
rpm -ivh /srv/kibana-5.2.2-x86_64.rpm
chown elasticsearch:elasticsearch /data/eslog -R
chown elasticsearch:elasticsearch /data/esnginx -R

Configuration file (3 master + 6 data):

[root@ES191 elasticsearch]# cat elasticsearch.yml | grep -Ev '^#|^$'

Pay particular attention here. Start the cluster:

service elasticsearch start

Health check: use the elasticsearch-head plugin at http://192.168.188.215:9100/ and connect it to any node, e.g. 192.168.188.191:9200.

Shard settings: the official advice is to set them when the index is created.

curl -XPUT 'http://192.168.188.193:9200/_all/_settings?preserve_existing=true' -d '{ "index.number_of_replicas": "1", "index.number_of_shards": "6" }'

This did not take effect; it later turned out the shard count can be specified when the index template is created, so for now the defaults of 1 replica and 5 shards are in use.

Other errors (for reference only; there is a plan for these in the optimization phase):

bootstrap.system_call_filter: false # for "system call filters failed to install"; see https://www.elastic.co/guide/en/elasticsearch/reference/current/system-call-filter-check.html

[WARN ][o.e.b.JNANatives ] unable to install syscall filter:
java.lang.UnsupportedOperationException: seccomp unavailable: requires kernel 3.5+ with CONFIG_SECCOMP and CONFIG_SECCOMP_FILTER compiled in

4. Deploy the Kafka cluster

The Kafka setup has two parts:

1. the zookeeper cluster
2. the kafka cluster

wget http://mirrors.hust.edu.cn/apache/kafka/0.10.0.1/kafka_2.11-0.10.0.1.tgz
tar zxvf kafka_2.11-0.10.0.1.tgz -C /usr/local/
ln -s /usr/local/kafka_2.11-0.10.0.1 /usr/local/kafka

A diff of server.properties and zookeeper.properties against the previous version showed little change, so they can be used largely as-is.

vim /usr/local/kafka/config/server.properties
mkdir /data/kafkalog

Adjust the heap size:

vim /usr/local/kafka/bin/kafka-server-start.sh
export KAFKA_HEAP_OPTS="-Xmx16G -Xms16G"

Start kafka:

/usr/local/kafka/bin/kafka-server-start.sh -daemon /usr/local/kafka/config/server.properties

Create the front-end topics:

/usr/local/kafka/bin/kafka-topics.sh --create --topic ngx1-168 --replication-factor 1 --partitions 3 --zookeeper 192.168.188.237:2181,192.168.188.238:2181,192.168.188.239:2181
/usr/local/kafka/bin/kafka-topics.sh --create --topic ngx2-178 --replication-factor 1 --partitions 3 --zookeeper 192.168.188.237:2181,192.168.188.238:2181,192.168.188.239:2181
/usr/local/kafka/bin/kafka-topics.sh --create --topic ngx3-188 --replication-factor 1 --partitions 3 --zookeeper 192.168.188.237:2181,192.168.188.238:2181,192.168.188.239:2181

Check the topics:

/usr/local/kafka/bin/kafka-topics.sh --list --zookeeper 192.168.188.237:2181,192.168.188.238:2181,192.168.188.239:2181
ngx1-168
ngx2-178
ngx3-188

3. Start on boot:

cat /etc/rc.local
/usr/local/zookeeper/bin/zkServer.sh start
/usr/local/kafka/bin/kafka-server-start.sh -daemon /usr/local/kafka/config/server.properties

Note: if startup is placed in rc.local and Java was not installed as the yum openjdk-1.8.0 package, JAVA_HOME must be specified there, or the Java environment will not be in effect and the zookeeper and kafka services, which depend on it, will fail to start. This is because the Java environment is usually configured in /etc/profile, which only takes effect after rc.local has already run.
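To make the rc.local caveat concrete, here is a minimal /etc/rc.local sketch that exports JAVA_HOME before starting the services. The /usr/local/jdk1.8.0_121 path is an assumption for illustration; substitute the actual Java install location.

```sh
#!/bin/sh
# /etc/rc.local -- sketch only; the JAVA_HOME path below is assumed, adjust to your install
export JAVA_HOME=/usr/local/jdk1.8.0_121
export PATH=$JAVA_HOME/bin:$PATH

# start zookeeper first, then the kafka broker
/usr/local/zookeeper/bin/zkServer.sh start
/usr/local/kafka/bin/kafka-server-start.sh -daemon /usr/local/kafka/config/server.properties
```

Exporting the variables inside the script keeps the services independent of /etc/profile, which is only sourced at interactive login.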
5. Deploy and configure logstash

Install:

rpm -ivh logstash-5.2.2.rpm
mkdir /usr/share/logstash/config

# 1. Copy the configuration files into the logstash home
cp -r /etc/logstash/* /usr/share/logstash/config/

# 2. Configure the path
vim /usr/share/logstash/config/logstash.yml
Before: path.config: /etc/logstash/conf.d
After: path.config: /usr/share/logstash/config/conf.d

# 3. Modify startup.options
Before: LS_SETTINGS_DIR=/etc/logstash
After: LS_SETTINGS_DIR=/usr/share/logstash/config

After changing startup.options, run /usr/share/logstash/bin/system-install for the change to take effect.

Configure the consumer/output side. Each of the 3 logstash instances handles one part:

in-kafka-ngx1-out-es.conf
in-kafka-ngx2-out-es.conf
in-kafka-ngx3-out-es.conf

[root@logstash297 conf.d]# cat in-kafka-ngx1-out-es.conf

nginx template:

[root@logstash297 logstash]# cat /usr/share/logstash/templates/nginx_template

Start:

/usr/share/logstash/bin/logstash -f /usr/share/logstash/config/conf.d/in-kafka-ngx1-out-es.conf &

For starting logstash on boot by default, see /usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-kafka-5.1.5/DEVELOPER.md

Error handling:

[2017-05-08T12:24:30,388][ERROR][logstash.inputs.kafka ] Unknown setting 'zk_connect' for kafka
[2017-05-08T12:24:30,390][ERROR][logstash.inputs.kafka ] Unknown setting 'topic_id' for kafka
[2017-05-08T12:24:30,390][ERROR][logstash.inputs.kafka ] Unknown setting 'reset_beginning' for kafka
[2017-05-08T12:24:30,395][ERROR][logstash.agent ] Cannot load an invalid configuration {:reason=>"Something is wrong with your configuration."}

These settings belong to the old 2.x kafka input plugin; in Logstash 5.x the kafka input connects to the brokers directly and uses bootstrap_servers and topics instead of zk_connect and topic_id.

Verify the logs:

[root@logstash297 conf.d]# cat /var/log/logstash/logstash-plain.log

6. Deploy and configure filebeat

Install:

rpm -ivh filebeat-5.2.2-x86_64.rpm

The nginx log format needs to be JSON.

Configure filebeat:

vim /etc/filebeat/filebeat.yml
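The filebeat.yml contents did not survive in this copy, so here is a minimal sketch of the setup described in this post: reading a JSON-formatted nginx access log, dropping the beat-related fields, and shipping directly to the Kafka topic ngx1-168 from section 4. The log path and the broker port 9092 are assumptions; adjust them to the real environment.

```yaml
filebeat.prospectors:
- input_type: log
  paths:
    - /data/wwwlogs/access_nginx.log   # assumed path; nginx must write one JSON object per line
  json.keys_under_root: true           # lift the JSON fields to the top level of the event
  json.overwrite_keys: true

# drop the unnecessary beat-related fields mentioned at the top of this post
processors:
- drop_fields:
    fields: ["beat", "input_type", "offset"]

output.kafka:
  # brokers assumed to run on the zookeeper hosts, default port 9092
  hosts: ["192.168.188.237:9092", "192.168.188.238:9092", "192.168.188.239:9092"]
  topic: "ngx1-168"
  compression: gzip
```

Writing to Kafka instead of logstash lets the 18 filebeat nodes buffer through the brokers, which is the reason this post's architecture puts Kafka between filebeat and logstash.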