Elasticsearch Cluster

Preface

Notes on setting up an Elasticsearch cluster.

Environment

  1. jdk 1.8.0_161
  2. elasticsearch-5.6.8
  3. Three virtual machines on the same subnet: 192.168.1.111, 192.168.1.112, 192.168.1.113
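
Before wiring the nodes together, it is worth confirming that every machine runs the same JDK and Elasticsearch versions. A minimal check, assuming Elasticsearch has already been unpacked on each host (the relative path below is an assumption):

# Run on each of the three machines.
java -version                    # should report 1.8.0_161
./bin/elasticsearch --version    # run from the Elasticsearch directory; should report 5.6.8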

Node Roles

With only three virtual machines, there is no need to split node responsibilities finely; every node takes on all of the roles below, which is exactly what the default node settings provide, so no role-related properties need to be changed.

192.168.1.111: Master-eligible node, Data node, Client node, Ingest node
192.168.1.112: Master-eligible node, Data node, Client node, Ingest node
192.168.1.113: Master-eligible node, Data node, Client node, Ingest node

The role-related defaults are left untouched:

node.master: true
node.data: true
node.ingest: true
search.remote.connect: true
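
Once the three nodes (configured in the next section) are running, the role assignment can be double-checked with the _cat/nodes API; any of the hosts can answer the request, and 9200 is assumed to be the default HTTP port:

# m = master-eligible, d = data, i = ingest; the starred node is the elected master.
curl 'http://192.168.1.111:9200/_cat/nodes?v&h=ip,name,node.role,master'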

Cluster Configuration

All three nodes use the same cluster name:

cluster.name: lpp-elasticsearch-cluster

Each node also sets discovery.zen.minimum_master_nodes: 2, which follows the (master-eligible nodes / 2) + 1 rule for three master-eligible nodes and guards against split-brain elections.

192.168.1.111

cluster.name: lpp-elasticsearch-cluster
node.name: node-1
network.host: 192.168.1.111
discovery.zen.minimum_master_nodes: 2
discovery.zen.ping.unicast.hosts: ["192.168.1.112", "192.168.1.113"]

192.168.1.112

cluster.name: lpp-elasticsearch-cluster
node.name: node-2
network.host: 192.168.1.112
discovery.zen.minimum_master_nodes: 2
discovery.zen.ping.unicast.hosts: ["192.168.1.111", "192.168.1.113"]

192.168.1.113

cluster.name: lpp-elasticsearch-cluster
node.name: node-3
network.host: 192.168.1.113
discovery.zen.minimum_master_nodes: 2
discovery.zen.ping.unicast.hosts: ["192.168.1.111", "192.168.1.112"]
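
With each node's elasticsearch.yml in place, start the nodes one by one and confirm that the HTTP and transport ports are listening. The sketch below assumes Elasticsearch is unpacked under /opt/elasticsearch-5.6.8 and runs as a non-root user named elastic; both the path and the user are assumptions, not part of the setup above:

# Elasticsearch refuses to start as root, so switch to a regular user first.
su - elastic
cd /opt/elasticsearch-5.6.8    # assumed install path
./bin/elasticsearch -d         # -d runs the node as a daemon
# The HTTP port (9200) and transport port (9300) should both be listening:
ss -tlnp | grep -E '9200|9300'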

Cluster Status

http://192.168.1.111:9200/_cluster/health?pretty

{
  "cluster_name" : "lpp-elasticsearch-cluster",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 3,
  "number_of_data_nodes" : 3,
  "active_primary_shards" : 5,
  "active_shards" : 10,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}

status   meaning
green    All primary and replica shards are allocated and running.
yellow   All primary shards are running, but one or more replica shards are not.
red      At least one primary shard is not running.
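
If the status is yellow or red, the per-shard view helps locate the problem; a quick check against the same cluster:

# List every shard and its state; problem shards show up as UNASSIGNED.
curl 'http://192.168.1.111:9200/_cat/shards?v'
# Ask the cluster why a shard is unassigned (available since Elasticsearch 5.0).
curl 'http://192.168.1.111:9200/_cluster/allocation/explain?pretty'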

Problems Encountered

Nodes could not reach each other

[2018-04-18T03:43:49,714][INFO ][o.e.d.z.ZenDiscovery     ] [node-2] failed to send join request to master [{node-1}{XCSxTQC0QquDZAMR9OdGww}{Nr10AA1vS0ydVQ51scSXnw}{192.168.1.111}{192.168.1.111:9300}], reason [RemoteTransportException[[node-1][192.168.1.111:9300][internal:discovery/zen/join]]; nested: ConnectTransportException[[node-2][192.168.1.112:9300] connect_timeout[30s]]; nested: IOException[No route to host: 192.168.1.112/192.168.1.112:9300]; nested: IOException[No route to host]; ]
[2018-04-18T03:43:52,736][WARN ][o.e.d.z.ZenDiscovery ] [node-2] failed to connect to master [{node-3}{F9nhxs1wR_SqXrvrAL-hIQ}{713Cq6OEQuac-Lr_DfF4Bg}{192.168.1.113}{192.168.1.113:9300}], retrying...
org.elasticsearch.transport.ConnectTransportException: [node-3][192.168.1.113:9300] connect_timeout[30s]
at org.elasticsearch.transport.netty4.Netty4Transport.connectToChannels(Netty4Transport.java:363) ~[?:?]
at org.elasticsearch.transport.TcpTransport.openConnection(TcpTransport.java:570) ~[elasticsearch-5.6.8.jar:5.6.8]
at org.elasticsearch.transport.TcpTransport.connectToNode(TcpTransport.java:473) ~[elasticsearch-5.6.8.jar:5.6.8]
at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:342) ~[elasticsearch-5.6.8.jar:5.6.8]
at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:329) ~[elasticsearch-5.6.8.jar:5.6.8]
at org.elasticsearch.discovery.zen.ZenDiscovery.joinElectedMaster(ZenDiscovery.java:458) [elasticsearch-5.6.8.jar:5.6.8]
at org.elasticsearch.discovery.zen.ZenDiscovery.innerJoinCluster(ZenDiscovery.java:410) [elasticsearch-5.6.8.jar:5.6.8]
at org.elasticsearch.discovery.zen.ZenDiscovery.access$4100(ZenDiscovery.java:82) [elasticsearch-5.6.8.jar:5.6.8]
at org.elasticsearch.discovery.zen.ZenDiscovery$JoinThreadControl$1.run(ZenDiscovery.java:1188) [elasticsearch-5.6.8.jar:5.6.8]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:575) [elasticsearch-5.6.8.jar:5.6.8]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_161]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_161]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_161]
Caused by: io.netty.channel.AbstractChannel$AnnotatedNoRouteToHostException: No route to host: 192.168.1.113/192.168.1.113:9300
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) ~[?:?]
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717) ~[?:?]
at io.netty.channel.socket.nio.NioSocketChannel.doFinishConnect(NioSocketChannel.java:352) ~[?:?]
at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:340) ~[?:?]
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:632) ~[?:?]
at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:544) ~[?:?]
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:498) ~[?:?]
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:458) ~[?:?]
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) ~[?:?]
... 1 more
Caused by: java.net.NoRouteToHostException: No route to host
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) ~[?:?]
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717) ~[?:?]
at io.netty.channel.socket.nio.NioSocketChannel.doFinishConnect(NioSocketChannel.java:352) ~[?:?]
at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:340) ~[?:?]
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:632) ~[?:?]
at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:544) ~[?:?]
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:498) ~[?:?]
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:458) ~[?:?]
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) ~[?:?]
... 1 more

This was caused by the firewall still being enabled. On CentOS 7, stopping the firewall and rebooting the machine fixes it. In production, for security, keep the firewall on and open only the communication ports such as 9200 and 9300.

# 1. Stop firewalld
sudo systemctl stop firewalld.service
# 2. Keep firewalld from starting at boot
sudo systemctl disable firewalld.service
# 3. Reboot the machine
sudo reboot
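
For the production approach mentioned above (keeping the firewall on and opening only the Elasticsearch ports), a minimal firewalld sketch looks like this; adjust the ports if http.port or transport.tcp.port were changed:

# Open only the HTTP (9200) and transport (9300) ports.
sudo firewall-cmd --permanent --add-port=9200/tcp
sudo firewall-cmd --permanent --add-port=9300/tcp
# Reload the rules so the change takes effect without a reboot.
sudo firewall-cmd --reload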

References

  1. https://www.elastic.co/guide/cn/elasticsearch/guide/current/_cluster_health.html