Kafka with ELK on swarm
 
=ZooKeeper=
 
:[[File:ClipCapIt-190302-174008.PNG]]
ZooKeeper is a distributed configuration manager, ...
 
A ZooKeeper cluster is called an ensemble. Because of the quorum (the minimum number of members needed for the ensemble to make decisions), the cluster should always contain an odd number of nodes: 3, 5 or 7. Going above seven is not worth it for performance reasons. The quorum is floor(N/2) + 1, so an ensemble of N nodes tolerates the loss of N - (floor(N/2) + 1) nodes: with 3 nodes the cluster survives the loss of 1 node, with 5 nodes the loss of 2.
 
We will build a three-member ZooKeeper ensemble. Every ensemble member must have the same configuration.<br>
/conf/zoo.cfg
<pre>
clientPort=2181
dataDir=/data
dataLogDir=/datalog
tickTime=2000
initLimit=5
syncLimit=2
autopurge.snapRetainCount=3
autopurge.purgeInterval=0
maxClientCnxns=60
server.1=zookeeper1:2888:3888
server.2=zookeeper2:2888:3888
server.3=zookeeper3:2888:3888
</pre>
 
* clientPort: the port on which ZooKeeper listens for client connections (2181 by default)
* server.X=zookeeper1:peerPort:leaderPort
** X: the unique ID of the ensemble member; it must match the number stored in the myid file in that node's dataDir
** peerPort: the port followers use to connect to the leader (2888 by convention)
** leaderPort: the port used for leader election (3888 by convention)
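Once all three nodes are up, the ensemble can be verified with ZooKeeper's four-letter-word admin commands. A quick check, assuming the commands are enabled (the default in the 3.4.x images) and the nodes are reachable from where you run it:
<pre>
# the node answers "imok" if it is serving
echo ruok | nc zookeeper1 2181

# shows whether this node is the leader or a follower
echo mntr | nc zookeeper1 2181 | grep zk_server_state
</pre>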
 
 
 
==Web-GUI==
https://github.com/qiuxiafei/zk-web<br>
 
/app/zk-web/conf/zk-web-conf.clj
<pre>
{
  :server-port 8080
  :users {
    "admin" "12345"
    ;; map of user -> password
    ;; you can add more
  }
  :default-node "zookeeper1:2181/"
}
</pre>
 
The GUI can connect to only one ZooKeeper node at a time, but new connections can be added through the web interface. The default-node parameter specifies which ZooKeeper node the web GUI should connect to by default when it starts. You only need to switch to another node on the UI if zookeeper1 goes down due to maintenance or a failure.
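zk-web presumably reads its configuration only at startup, so after editing zk-web-conf.clj on the NFS share, restart the GUI service to pick up the change. A possible command, assuming the stack below is deployed under the hypothetical name kafka:
<pre>
docker service update --force kafka_zookeeper-gui
</pre>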
 
 
 
==Swarm stack==
 
<source lang="jaml">
version: '3'
 
 
 
services:
zookeeper1:
image: zookeeper
networks:
- kafa-net
volumes:
- "zookeeper1-conf:/conf"
- "zookeeper1-data:/data"
- "zookeeper1-datalog:/datalog"
deploy:
placement:
constraints:
- node.role == worker
restart_policy:
condition: on-failure
resources:
reservations:
memory: 100m
...
zookeeper-gui:
image: tobilg/zookeeper-webui
networks:
- kafa-net
volumes:
- "zookeeper-gui:/app/zk-web/conf"
ports:
- 8089:8080
deploy:
placement:
constraints:
- node.role == worker
restart_policy:
condition: on-failure
 
networks:
kafa-net:
driver: overlay
 
volumes:
zookeeper1-conf:
driver: nfs
driver_opts:
share: 192.168.42.1:/home/adam/dockerStore/zookeeper/node1/conf/
...
zookeeper1-data:
driver: nfs
driver_opts:
share: 192.168.42.1:/home/adam/dockerStore/zookeeper/node1/data/
...
zookeeper1-datalog:
driver: nfs
driver_opts:
share: 192.168.42.1:/home/adam/dockerStore/zookeeper/node1/datalog/
...
zookeeper-gui:
driver: nfs
driver_opts:
share: 192.168.42.1:/home/adam/dockerStore/zookeeper/zk-web/
</source>
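The stack can then be deployed with docker stack deploy; the compose file name kafka-stack.yml and the stack name kafka below are only examples:
<pre>
# deploy (or update) the stack from the compose file
docker stack deploy -c kafka-stack.yml kafka

# check that every replica is running
docker stack ps kafka
</pre>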
 
==Running in production==
 
 
In a typical production use case, a minimum of 8 GB of RAM should be dedicated for ZooKeeper use. Note that ZooKeeper is sensitive to swapping and any host running a ZooKeeper server should avoid swapping.
 
You should also consider providing a dedicated CPU core to each ZooKeeper server so that context switching is not an issue.
 
Disk performance is vital to maintaining a healthy ZooKeeper cluster. Solid state drives (SSD) are highly recommended, as ZooKeeper must have low-latency disk writes in order to perform optimally. Each request to ZooKeeper must be committed to disk on each server in the quorum before the result is available for read. A dedicated SSD of at least 64 GB on each ZooKeeper server is recommended for a production deployment.
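In swarm this can be enforced with node labels. A minimal sketch of the relevant fragment of the service definition, assuming you have labeled the SSD-backed workers yourself (e.g. with docker node update --label-add disk=ssd &lt;node&gt;):
<source lang="yaml">
    deploy:
      placement:
        constraints:
          - node.role == worker
          # schedule only onto workers labeled as SSD-backed
          - node.labels.disk == ssd
</source>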
 
ZooKeeper runs in a JVM and is not notably heap-intensive in the Kafka use case. A heap size of 1 GB is recommended for most use cases; monitor heap usage to make sure garbage collection does not cause delays.
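In the stack above the heap can be set through the JVMFLAGS environment variable, assuming the official zookeeper image passes it through to the JVM (its start script appends $JVMFLAGS to the java command line). A sketch of the zookeeper1 service amended accordingly; the raised memory reservation is likewise only an example:
<source lang="yaml">
  zookeeper1:
    image: zookeeper
    environment:
      # 1 GB fixed-size heap, as recommended above
      JVMFLAGS: "-Xmx1g -Xms1g"
    deploy:
      resources:
        reservations:
          memory: 1500m
</source>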
 
 
 
 
 
 
 
=Kafka=