Openshift - HAproxy metrics EN

<br>
=== HTTP test application ===
For generating HTTP traffic, I made a test application that can generate responses with arbitrary response times and HTTP response codes. Source available here: https://github.com/berkiadam/haproxy-metrics/tree/master/test-app
The Kubernetes install files can be found at the root of the git repository.
After installation, the application can be used as follows:
* http://test-app-service-mynamespace.192.168.42.185.nip.io/test/slowresponse/<delay in milliseconds>
* http://test-app-service-mynamespace.192.168.42.185.nip.io/test/slowresponse/<delay in milliseconds>/<http response code>
=Using HAproxy Metric Endpoint=
HAproxy has a built-in metric endpoint, which by default provides metrics in the Prometheus standard format (CSV output is also available). Most of the metrics it provides are not really usable. There are two metrics worth collecting in Prometheus: one counts the responses with HTTP 200 status, broken down per backend, and the other counts the responses with HTTP 500.
The metric endpoint (/metrics) is turned on by default. It can be turned off, but HAproxy will still collect the metrics in the background. The HAproxy pod is made up of two components: one is HAproxy itself, the other is the router controller that manages the HAproxy configuration. Metrics are collected from both components every 5 seconds by the metric manager. Both frontend and backend metrics are collected, grouped by service.
:[[File:ClipCapIt-190808-094455.PNG|600px]]
== Query Metrics ==
There are two ways to query metrics.
# username + password: the /metrics endpoint can be queried with basic authentication.
# Defining the query from Kubernetes using RBAC rules for the appropriate serviceAccount: for machine processing (e.g. Prometheus) it is possible to allow a given service account to query the metrics via RBAC rules.
<br>
=== User + password based query authentication ===
The default metric URL is:
<pre>
http://<user>:<password>@<router_IP>:<STATS_PORT>/metrics
</pre>
The user, password and port can be found in the service definition of the HAproxy router. First, find the router service:
<pre>
# kubectl get svc -n default
router ClusterIP 172.30.130.191 <none> 80/TCP,443/TCP,1936/TCP 4d
</pre>
You can see that, in addition to ports 80 and 443, it also listens on port '''1936''', which is the port of the metrics endpoint.
Now let's look at the service definition to extract the user and password:
<source lang="C++">
# kubectl get svc router -n default -o yaml
...
</source>
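The relevant part of the output looks roughly like this (an illustrative sketch only; the annotations shown are how a default OpenShift 3.x router service exposes the stats credentials, and the password is the one used below):
<source lang="C++">
apiVersion: v1
kind: Service
metadata:
  annotations:
    prometheus.io/port: "1936"
    prometheus.io/scrape: "true"
    prometheus.openshift.io/password: 4v9a7ucfMi
    prometheus.openshift.io/username: admin
  name: router
  namespace: default
spec:
  ports:
  - name: 80-tcp
    port: 80
  - name: 443-tcp
    port: 443
  - name: 1936-tcp
    port: 1936
</source>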
According to this, using the node's IP address (the minishift IP), the URL of the metrics endpoint is: http://admin:4v9a7ucfMi@192.168.42.64:1936/metrics (you can't invoke this URL in a web browser, as browsers aren't familiar with this format; use curl to test it):
<pre>
# curl admin:4v9a7ucfMi@192.168.42.64:1936/metrics
...
</pre>
=== ServiceAccount based query authentication ===
It is possible to query the HAproxy metrics not only with basic authentication, but also with RBAC rules.
You need to create a '''ClusterRole''' that allows queries on the '''routers/metrics''' endpoint. Later, this will be bound to the serviceAccount running Prometheus.
<br>
'''cr-prometheus-server-route.yaml'''
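A minimal sketch of such a ClusterRole (the object name is arbitrary; the '''routers/metrics''' resource belongs to the '''route.openshift.io''' API group):
<source lang="C++">
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: prometheus-server-route
rules:
- apiGroups:
  - route.openshift.io
  resources:
  - routers/metrics
  verbs:
  - get
</source>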
The second step is to create a '''ClusterRoleBinding''' that binds the serviceAccount belonging to the Prometheus server to the new role defined above.
<br>
'''crb-prometheus-server-route.yaml'''
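A sketch of such a binding (assuming the Prometheus server runs under the '''prometheus-server''' serviceAccount in the '''mynamespace''' namespace):
<source lang="C++">
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: prometheus-server-route
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: prometheus-server-route
subjects:
- kind: ServiceAccount
  name: prometheus-server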
  namespace: mynamespace
</source>
Let's create the two new objects:
<pre>
# kubectl apply -f cr-prometheus-server-route.yaml
# kubectl apply -f crb-prometheus-server-route.yaml
</pre>
==Prometheus integration==
Let's examine the '''Endpoint''' definition of the HAproxy router. Based on that, we can create the Prometheus configuration responsible for finding all running HAproxy instances at runtime. We have to find endpoints with the name '''router''' that have a port definition called '''1936-tcp'''; Prometheus will extract the port number for the HAproxy metrics from this port definition and query the default metric path (/metrics).
<pre>
# kubectl get Endpoints router -n default -o yaml
...
</pre>
<br>
<br>
In the Prometheus configuration, you need to add a new '''target''' that uses '''kubernetes_sd_configs''' to look for endpoints with the name '''router''' and the port named '''1936-tcp'''.
<source lang="c++">
- job_name: 'openshift-router'
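  # The rest of this job is a sketch based on the usual OpenShift router-metrics examples;
  # adjust the namespace, scheme and certificate paths to your environment.
  scheme: https
  tls_config:
    ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
    server_name: router.default.svc
  bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
  kubernetes_sd_configs:
  - role: endpoints
    namespaces:
      names:
      - default
  relabel_configs:
  # keep only the endpoints of the service named 'router' that expose the port named '1936-tcp'
  - source_labels: [__meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
    action: keep
    regex: router;1936-tcp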
</source>
Update the '''ConfigMap''' of the Prometheus configuration.
<pre>
# kubectl apply -f cm-prometheus-server-haproxy.yaml
</pre>
 
Let's see in the prometheus pod that the configuration has been reloaded:
<pre>
# kubectl describe pod prometheus-server-75c9d576c9-gjlcr -n mynamespace
Containers:
prometheus-server-configmap-reload:
...
prometheus-server:
</pre>
Let's look at the logs of the sidecar container running in the Prometheus pod (it is responsible for reloading the configuration). You should see that the configuration has been reloaded.
<pre>
# kubectl logs -c prometheus-server-configmap-reload prometheus-server-75c9d576c9-gjlcr -n mynamespace
</pre>
Let's check the Prometheus server container logs as well:
<pre>
# kubectl logs -c prometheus-server prometheus-server-75c9d576c9-gjlcr -n mynamespace
...
</pre>
If there were more routers in the cluster, they would all appear here as separate endpoints.
<br>
 
<br>
 
<br>
==Metric types==
http://people.redhat.com/jrivera/openshift-docs_preview/openshift-origin/glusterfs-review/architecture/networking/haproxy-router.html <br>
At first glance, there are two meaningful metrics provided by HAproxy. These are:
<br>
=== haproxy_server_http_responses_total ===
This is a Prometheus counter that shows, per backend, how many responses were given with HTTP 200 and 500 status for a given service. There is no pod-based breakdown here, only service level. Unfortunately, we do not receive information on HTTP 3xx and 4xx responses; we will get those from the access log instead.
<br>
<br>
Let's generate a 200 response using the test application. The counter of the 2xx responses should grow by one: http://test-app-service-mynamespace.192.168.42.185.nip.io/test/slowresponse/1/200
<pre>
haproxy_server_http_responses_total{code="2xx",job="openshift-router",namespace="mynamespace",pod="test-app",route="test-app-service",service="test-app-service"} 1
</pre>
<br>
Now let's generate a 500 response using the test application. This time, the counter of the 5xx responses should grow by one: http://test-app-service-mynamespace.192.168.42.185.nip.io/test/slowresponse/1/500
<pre>
haproxy_server_http_responses_total{code="5xx",job="openshift-router",namespace="mynamespace",pod="test-app",route="test-app-service",service="test-app-service"} 1
</pre>
=== haproxy_server_response_errors_total ===
type: counter
<pre>
haproxy_server_response_errors_total{instance="192.168.122.223:1936",job="openshift-router",namespace="mynamespace",pod="test-app-57574c8466-pvcsg",route="test-app-service",server="172.17.0.17:8080",service="test-app-service"}
</pre>
<br>
=Collecting metrics from logs=
==Overview==
The task is to process the HAproxy access log with a log parser and generate Prometheus metrics from it that are made available to Prometheus through an HTTP endpoint. We will use the grok-exporter tool, which can do both: it can read logs from a file or from stdin and generate metrics based on them. The grok-exporter will receive the logs from HAproxy via a bundled rsyslog server. Rsyslog puts the logs into a file from which grok-exporter can read them, and grok-exporter converts the logs into Prometheus metrics.
Necessary steps:
* We need to create a docker image from grok-exporter that also contains rsyslog. (The container must be able to run the rsyslog server as root, which requires extra OpenShift configuration.)
* The grok-exporter image will run on OpenShift; the grok-exporter configuration is placed in a ConfigMap and the rsyslog working directory must be an OpenShift volume.
* For the grok-exporter deployment, we have to create a ClusterIP-type service that can perform load balancing between the grok-exporter pods.
* The routers (HAproxy) must be configured to write access logs in debug mode and send them to the remote rsyslog server on port 514 of the grok-exporter service.
* The rsyslog server running in the grok-exporter pod writes the received HAproxy access logs into the file '''/var/log/messages''' (an emptyDir type volume) and also sends them to '''stdout'''.
* Logs written to stdout are also picked up by the docker log driver and forwarded to the centralized log architecture (log retention).
* The grok-exporter program reads '''/var/log/messages''' and generates Prometheus metrics from the HAproxy access logs.
* Prometheus has to be configured to use '''kubernetes_sd_configs''' to collect the metrics directly from the grok-exporter pods, bypassing the service (and its load balancing), since every pod needs to be queried.
<br>
* quantile
You can set any number of labels for a metric using the parsed elements of the log line. Grok-exporter is based on the '''logstash-grok''' implementation, using patterns and functions defined for logstash.
Detailed documentation at: <br>
https://github.com/fstab/grok_exporter/blob/master/CONFIG.md<br>
<br>
The grok-exporter can read from three types of input sources:
* '''file''': we will stick to this; it will process the log written by rsyslog.
* '''webhook''': this solution could be used if logstash were used as the syslog server; logstash can then forward the logs to the grok-exporter webhook with its "http" output plugin.
* '''stdin''': with rsyslog, stdin can also be used. This requires the '''omprog''' module, which reads from the rsyslog socket and passes the logs on to a program through stdin; omprog restarts the program if it is no longer running: https://www.rsyslog.com/doc/v8-stable/configuration/modules/omprog.html
=== Alternative Solutions ===
'''Fluentd''' <br>
'''Fluentd''' could also solve the problem. To do this, we would need three fluentd plugins (I haven't tried this):
* fluent-plugin-rewrite-tag-filter
* fluent-plugin-prometheus
'''mtail''':<br>
The other alternative would be Google's '''mtail''' project, which is supposed to be more resource-efficient at processing logs than the grok engine.<br>
https://github.com/google/mtail
* global:
* input: tells grok-exporter where and how to retrieve the logs. It can be stdin, file or webhook. We will use the file input.
* grok: the location of the grok patterns. In the Docker image, the pattern definitions are stored in the /grok/patterns folder.
* metrics: this is the most important part. Here you need to define the metrics and the associated regular expressions (in the form of grok patterns).
* server: defines which port the built-in web server should listen on.
<br>
====Metrics====
Metrics must be defined per metric type. The four basic Prometheus metric types are supported: '''Gauge, Counter, Histogram, Summary''' (quantile).
Under the type, you must specify:
* name: This will be the name of the metric
* help: This will be the help text for the metric.
* match: describes the structure of the log line as a regular expression that the log must match. Here you can use predefined grok patterns:
** '''BASIC grok patterns''': https://github.com/logstash-plugins/logstash-patterns-core/blob/master/patterns/grok-patterns
** '''HAPROXY patterns''': https://github.com/logstash-plugins/logstash-patterns-core/blob/master/patterns/haproxy
* labels: here you can reference the named capture groups of the match; each entry creates a label whose value is the parsed data.
<br>
==== match ====
In match, you have to write a regular expression from grok building blocks. It is assumed that the elements of the log line are separated by spaces. Each building block has the form '''%{PATTERN_NAME}''', where PATTERN_NAME must exist in one of the pattern collections. The most common pattern is '''%{DATA}''', which matches an arbitrary piece of data that contains no whitespace. Several patterns are composed of multiple elementary patterns. If you want the expression described by a pattern to become a capture group, you have to name the pattern, for example:
<pre>
%{DATA:this_is_the_name}
</pre>
The value of the field matched by the pattern will then be available in the variable '''this_is_the_name''', which can be referenced when defining the value of the metric or when producing a label.
<br>
==== labels ====
In the labels section you can refer to the named patterns. This assigns the value parsed from the log line to the defined label. For example, using the '''%{DATA:this_is_the_name}''' pattern, you could write the following label: <br>
<pre>
mylabel: '{{.this_is_the_name}}'
</pre>
Then, if the field matched by the %{DATA} pattern was 'myvalue', the metric would get the following label: '''{mylabel="myvalue"}''' <br>
Let's look at an example: <br>
The following log line is given:
<pre>
7/30/2016 2:37:03 PM adam 1.5
</pre>
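The corresponding metric definition could look like this (a sketch in the style of the grok-exporter documentation; the match pattern is an assumption for the date format above):
<source lang="C++">
metrics:
- type: counter
  name: grok_example_lines_total
  help: Example counter metric with labels.
  # %{WORD} swallows the AM/PM token; the user name is captured into 'user'
  match: '%{DATE} %{TIME} %{WORD} %{USER:user} %{NUMBER}'
  labels: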
    user: '{{.user}}'
</source>
The metric will be named '''grok_example_lines_total'''. The resulting metric will be:
<pre>
# HELP grok_example_lines_total Example counter metric with labels.
# TYPE grok_example_lines_total counter
grok_example_lines_total{user="adam"} 1
</pre>
<br>
==== Determine the value of a metric ====
For a counter-type metric, you do not need to define a value, because it simply counts the matching log lines. For all other types, you have to specify what counts as the value. This is specified in the '''value''' section, where a named grok pattern from the match section is referenced with the same Go-template syntax as in the labels. E.g. the following two log lines are given:
<pre>
7/30/2016 2:37:03 PM adam 1
7/30/2016 2:37:03 PM Adam 5
</pre>
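For these, a gauge metric that uses the parsed number as its value could be defined along these lines (a sketch; the metric and field names are assumptions):
<source lang="C++">
metrics:
- type: gauge
  name: grok_example_values
  help: Example gauge metric whose value is taken from the log line.
  match: '%{DATE} %{TIME} %{WORD} %{USER:user} %{NUMBER:val}'
  value: '{{.val}}'
  labels:
    user: '{{.user}}'
</source>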
<br>
==== Functions ====
You can apply functions to the value of the metric and to the labels. Functions require grok-exporter version '''0.2.7''' or later. Both string manipulation and arithmetic functions can be used. The following two-argument arithmetic functions are supported (a usage sketch follows the list):
* add
* subtract
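For example, a histogram whose value is transformed with an arithmetic function could be defined like this (a sketch; the grok-exporter documentation also lists '''multiply''' and '''divide''', and the bucket boundaries here are arbitrary):
<source lang="C++">
- type: histogram
  name: grok_example_lines
  help: Example counter metric with labels.
  match: '%{DATE} %{TIME} %{WORD} %{USER:user} %{NUMBER:val}'
  # scale the parsed value before it is observed by the histogram
  value: '{{multiply .val 1000}}'
  buckets: [1, 2, 5, 10]
  labels:
    user: '{{.user}}'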
</source>
Then the metric output changes to:
<pre>
# HELP grok_example_lines Example counter metric with labels.
# TYPE grok_example_lines histogram
...
</pre>
<br>
Example haproxy access-log:
<pre>
Aug 6 20:53:30 192.168.122.223 haproxy[39]: 192.168.42.1:50708 [06/Aug/2019:20:53:30.267] public be_edge_http:mynamespace:test-app-service/pod:test-app-57574c8466-qbtg8:test-app-service:172.17.0.12:8080 1/0/0/321/321 200 135 - - --NI 2/2/0/1/0 0/0 "GET /test/slowresponse/1 HTTP/1.1"
</pre>
In the config.yml file, we will define a histogram that contains the response times of the full requests. This is a classic response-time histogram, usually with the following buckets (in seconds):
<pre>
[0.1, 0.2, 0.4, 1, 3, 8, 20, 60, 120]
</pre>
By convention, response time metrics are called '''<prefix>_http_request_duration_seconds'''.
'''config.yml'''
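The actual config.yml is available in the git repository linked above. A simplified sketch of it, built on the logstash HAPROXY pattern collection (the '''HAPROXYHTTP''' pattern and its '''time_duration''' field come from those patterns; the real config also extracts namespace, pod and service labels from the backend name with custom patterns):
<source lang="C++">
global:
  config_version: 2
input:
  type: file
  path: /var/log/messages
  readall: true        # read the whole file from the start; set to false in production
grok:
  patterns_dir: /grok/patterns
metrics:
- type: histogram
  name: haproxy_http_request_duration_seconds
  help: The request durations of the applications running in openshift that have route defined.
  match: '%{HAPROXYHTTP}'
  # time_duration (Tt) is the total request time in milliseconds
  value: '{{divide .time_duration 1000}}'
  buckets: [0.1, 0.2, 0.4, 1, 3, 8, 20, 60, 120]
  labels:
    frontend: '{{.frontend_name}}'
    backend: '{{.backend_name}}'
server:
  port: 9144
</source>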
* '''port: 9144''' -> This port will provide the /metrics endpoint.
<br>
{{warning|Do not forget to set '''readall''' to '''false''' in a live environment, because leaving it on true greatly reduces efficiency}}
<br>
<br>
<br>
The second problem is that they are based on an Ubuntu base image, where it is very difficult to get rsyslog to log to stdout, which is needed so that the Kubernetes centralized log collector can also receive the HAproxy logs and both monitoring and centralized logging can be served. Therefore, the original Dockerfile will be ported to '''centos 7''' and supplemented with the installation of the rsyslog server.
<br>
All necessary files are available on git-hub: https://github.com/berkiadam/haproxy-metrics/tree/master/grok-exporter-centos <br>
I also created an Ubuntu based solution, which is an extension of the original docker-hub solution; it can also be found on GitHub in the '''grok-exporter-ubuntu''' folder. For the rest of the howto, we will always use the centos version.
<br>
<br>
=== Dockerfile ===
We will start from the '''palobo/grok_exporter''' Dockerfile, complement it with the rsyslog installation and port it to centos: https://github.com/berkiadam/haproxy-metrics/tree/master/grok-exporter-centos
<br>
➲[[File:Grok-exporter-docker-build.zip|Download all files required for Docker image build]]
=== Local build and local test ===
First, we will build the docker image with the local docker daemon so that we can run it locally for testing. Later we will build it on the minishift VM as well, since we will only be able to upload it to the minishift docker registry from there. Since we will be uploading the image to a remote (not local) docker repository, it is important to follow the naming convention:
<pre>
<repo URL>:<repo port>/<namespace>/<image-name>:<tag>
</pre>
We will upload the image to the docker registry running in minishift, so it is important to specify the address and port of the minishift docker registry and the OpenShift namespace where the image will be placed.
<pre>
# docker build -t 172.30.1.1:5000/default/grok_exporter:1.1.0 .
</pre>
The resulting image can be tested with the native, local docker. Create a haproxy test log file ('''haproxy.log''') with the following content. This will be processed by grok-exporter as if it had come from haproxy.
<pre>
Aug 6 20:53:30 192.168.122.223 haproxy[39]: 192.168.42.1:50708 [06/Aug/2019:20:53:30.267] public be_edge_http:mynamespace:test-app-service/pod:test-app-57574c8466-qbtg8:test-app-service:172.17.0.12:8080 1/0/0/321/321 200 135 - - --NI 2/2/0/1/0 0/0 "GET /test/slowresponse/1 HTTP/1.1"
</pre>
<br>
Put the '''config.yml''' file created above in the same folder. In config.yml, change the input.path to '''/grok/haproxy.log''' so that grok-exporter processes our test log file. Then start it with the '''docker run''' command:
<pre>
# docker run -d -p 9144:9144 -p 514:514 -v $(pwd)/config.yml:/etc/grok_exporter/config.yml -v $(pwd)/haproxy.log:/grok/haproxy.log --name grok 172.30.1.1:5000/default/grok_exporter:1.1.0
</pre>
<br>
After starting it, check in the logs that both grok and rsyslog have actually started:
<pre>
# docker logs grok
 * Starting enhanced syslogd rsyslogd
...
</pre>
<br>
<br>
As a second step, verify that the '''rsyslog''' server running in the docker container can receive remote log messages. To do this, first enter the container and look for the /var/log/messages file:
<pre>
# docker exec -it grok /bin/bash
</pre>
<br>
Now, from the host machine, use the '''logger''' command to send a log message to the rsyslog server running in the container on port 514:
<pre>
# logger -n localhost -P 514 -T "this is the message"
</pre>
(the -T flag means TCP)
The log message should then appear in the '''syslog''' file:
<pre>
Aug 8 16:54:25 dell adam this is the message
</pre>
You can delete the local docker container.
<br>
<br>
===Remote build===
We would like to upload the completed docker image to minishift's own registry. To do this, the image has to be built with the local docker daemon of the minishift VM, since the minishift registry can only be accessed from there. <br>Details at: [[Openshift_basics#Minishfit_docker_registry|➲Image push to the minishift docker registry]]
In order for the '''admin''' user to have the right to upload the image to the minishift registry in the '''default''' namespace, where the router is running, it needs the '''cluster-admin''' role. It is important to log in with '''-u system:admin''' and not just with '''oc login''', otherwise we will not have the right to issue the '''oc adm''' command. We will refer to the same user in the '''--as''' parameter.
<pre>
# oc login -u system:admin
# oc adm policy add-cluster-role-to-user cluster-admin admin --as=system:admin
cluster role "cluster-admin" added: "admin"
</pre>
{{note|If we get the error '''Error from server (NotFound): the server could not find the requested resource''', it means that our '''oc''' client program is older than the OpenShift version}}
Redirect our local docker client to the docker daemon running on the minishift VM, then log into the minishift docker registry:
<pre>
# minishift docker-env
# eval $(minishift docker-env)
# oc login
Username: admin
Password: <admin>
# docker login -u admin -p $(oc whoami -t) $(minishift openshift registry)
Login Succeeded
</pre>
Build it on the minishift VM as well:
<pre>
# docker build -t 172.30.1.1:5000/default/grok_exporter:1.1.0 .
</pre>
Log in to the minishift docker registry and issue the '''push''' command:
<pre>
# docker push 172.30.1.1:5000/default/grok_exporter:1.1.0
</pre>
<br>
==Kubernetes objects==
For grok-exporter we will create a serviceAccount, a deployment, a service and a configMap where we will store the grok-exporter configuration. In addition, we will modify the '''SecurityContextConstraints''' object named anyuid, because the rsyslog server requires the grok-exporter container to run in privileged mode.
* haproxy-exporter service account
* scc-anyuid.yaml
The full configuration can be downloaded here: [[File:Haproxy-kubernetes-objects.zip]], or can be found in the git repository: https://github.com/berkiadam/haproxy-metrics
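For reference, the ClusterIP service in front of the grok-exporter pods might look like this (a sketch inferred from the port list shown later; the pod selector label is an assumption):
<source lang="C++">
apiVersion: v1
kind: Service
metadata:
  name: haproxy-exporter-service
  namespace: default
spec:
  type: ClusterIP
  selector:
    app: haproxy-exporter      # assumed pod label
  ports:
  - name: metrics
    port: 9144
    protocol: TCP
  - name: log-tcp
    port: 514
    protocol: TCP
  - name: log-udp
    port: 514
    protocol: UDP
</source>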
<br>
<br>
===Creating the ServiceAccount===
The haproxy-exporter needs its own serviceAccount, for which we will allow running the privileged (root) container. This is what the rsyslog server needs.
<pre>
# kubectl create serviceaccount haproxy-exporter -n default
serviceaccount/haproxy-exporter created
</pre>
As a result, the following serviceAccount definition was created:
<source lang="C++">
apiVersion: v1
...
</source>
<br>
Because of the rsyslog server in grok-exporter, it is important that the container runs in privileged mode. To do this, the serviceAccount belonging to the haproxy-exporter must be added to the SCC named '''anyuid''' to allow running as root. So we don't need the privileged SCC, because the container only wants to start as root; otherwise, rsyslog would not be able to create its sockets. {{warning|The admin rolebinding the developer user received for mynamespace is not enough to manage SCCs. You need to log in as admin to do this: oc login -u system:admin}}
<br><br>
Let's list the SCCs:
<pre>
# kubectl get SecurityContextConstraints
...
</pre>
The '''serviceAccount''' must be added to the '''users''' section of the '''anyuid''' SCC in the following format: - system:serviceaccount:<namespace>:<serviceAccount>
<br>
'''scc-anyuid.yaml'''
<source lang="C++">
kind: SecurityContextConstraints
metadata:
  name: anyuid
...
users:
- system:serviceaccount:default:haproxy-exporter
...
</source>
Since this is an existing '''scc''' and we only want to make a minor change to it, we can edit it in place:
<pre>
# oc edit scc anyuid
</pre>
<br>
===Creating the objects===
<pre>
# kubectl apply -f cm-haproxy-exporter.yaml
configmap/haproxy-exporter created
</pre>
<pre>
# kubectl apply -f deployment-haproxy-exporter.yaml
deployment.apps/haproxy-exporter created
# kubectl rollout status deployment haproxy-exporter -n default
deployment "haproxy-exporter" successfully rolled out
</pre>
<br>
===Testing===
Find the haproxy-exporter pod and look at its log:
<pre>
# kubectl logs haproxy-exporter-744d84f5df-9fj9m -n default
 * Starting enhanced syslogd rsyslogd ...done.
Starting server on http://haproxy-exporter-744d84f5df-9fj9m:9144/metrics
</pre>
Then enter the container and test that rsyslog works:
<pre>
# kubectl exec -it haproxy-exporter-647d7dfcdf-gbgrg /bin/bash -n default
</pre>
Then use the '''logger''' command to send a log message to rsyslog:
<pre>
logger -n localhost -P 514 -T "this is the message"
</pre>
Now let's look at the contents of the /var/log/messages file:
<pre>
# cat messages
Aug 28 19:16:09 localhost root: this is the message
</pre>
Exit the container and retrieve the pod logs again to see whether the log message was also written to stdout:
<pre>
# kubectl logs haproxy-exporter-647d7dfcdf-gbgrg -n default
Starting server on http://haproxy-exporter-647d7dfcdf-gbgrg:9144/metrics
2019-08-28T19:16:09+00:00 localhost root: this is the message
</pre>
<br>
==HAproxy configuration==
===Setting environment variables===
For HAproxy, we will set the address of the rsyslog server running in the haproxy-exporter pod via an environment variable. To do this, first list the haproxy-exporter service:
<pre>
# kubectl get svc -n default
NAME                       TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                    AGE
haproxy-exporter-service   ClusterIP   172.30.213.183   <none>        9144/TCP,514/TCP,514/UDP   15s
..
</pre>
HAproxy stores the rsyslog server address in the environment variable '''ROUTER_SYSLOG_ADDRESS''' (part of the router's DeploymentConfig). We can rewrite this at runtime with the '''oc set env''' command. After changing the variable, the pod will restart automatically.
<pre>
# oc set env dc/myrouter ROUTER_SYSLOG_ADDRESS=172.30.213.183 -n default
deploymentconfig.apps.openshift.io/myrouter updated
</pre>
{{note|In minishift, name resolution for service names does not work in the router container, because it is not the Kubernetes cluster DNS server that is configured but the minishift VM's. Therefore, all we can do is enter the IP address of the service instead of its name. In an OpenShift environment, we would use the name of the service}}
Then, in a second step, change the log level in HAproxy to debug, because the access log is only available at debug level.
<pre>
# oc set env dc/myrouter ROUTER_LOG_LEVEL=debug -n default
deploymentconfig.apps.openshift.io/myrouter updated
</pre>
{{warning|A performance test is needed to see how much extra load it puts on haproxy when it runs in debug mode}}
<br>
As a result of modifying the above two environment variables, the HAproxy configuration in the router container, in the file '''/var/lib/haproxy/conf/haproxy.config''', changed as follows:
<pre>
# kubectl exec -it myrouter-5-hf5cs /bin/bash -n default
$ cat /var/lib/haproxy/conf/haproxy.config
global
..
  log 172.30.82.232 local1 debug
</pre>
The important thing is that the haproxy-exporter service address and the '''debug''' log level have appeared in the log parameter.
<br>
<br>
<br>
===Testing the rsyslog server===
Generate some traffic through haproxy, then go back to the haproxy-exporter container and list the contents of the messages file:
<pre>
# kubectl exec -it haproxy-exporter-744d84f5df-9fj9m /bin/bash -n default
#
# tail -f /var/log/messages
Aug 9 12:52:17 192.168.122.223 haproxy[24]: Proxy fe_sni stopped (FE: 0 conns, BE: 0 conns).
Aug 9 12:52:17 192.168.122.223 haproxy[24]: Proxy be_no_sni stopped (FE: 0 conns, BE: 0 conns).
Aug 9 12:52:17 192.168.122.223 haproxy[24]: Proxy fe_no_sni stopped (FE: 0 conns, BE: 0 conns).
Aug 9 12:52:17 192.168.122.223 haproxy[24]: Proxy openshift_default stopped (FE: 0 conns, BE: 1 conns).
Aug 9 12:52:17 192.168.122.223 haproxy[24]: Proxy be_edge_http:dsp:nginx-route stopped (FE: 0 conns, BE: 0 conns).
Aug 9 12:52:17 192.168.122.223 haproxy[24]: Proxy be_http:mynamespace:prometheus-alertmanager-jv69s stopped (FE: 0 conns, BE: 0 conns).
Aug 9 12:52:17 192.168.122.223 haproxy[24]: Proxy be_http:mynamespace:prometheus-server-2z6zc stopped (FE: 0 conns, BE: 0 conns).
Aug 9 12:52:17 192.168.122.223 haproxy[24]: Proxy be_edge_http:mynamespace:test-app-service stopped (FE: 0 conns, BE: 0 conns).
Aug 9 12:52:17 192.168.122.223 haproxy[24]: Proxy be_edge_http:myproject:nginx-route stopped (FE: 0 conns, BE: 0 conns).
Aug 9 12:52:17 192.168.122.223 haproxy[32]: 127.0.0.1:43720 [09/Aug/2019:12:52:17.361] public openshift_default/<NOSRV> 1/-1/-1/-1/0 503 3278 - - SC-- 1/1/0/0/0 0/0 "HEAD / HTTP/1.1"
</pre>
If we look at the logs of the haproxy-exporter pod, we should see the same thing:
<pre>
...
Aug 9 12:57:21 192.168.122.223 haproxy[32]: 192.168.42.1:48266 [09/Aug/2019:12:57:20.636] public be_edge_http:mynamespace:test-app-service/pod:test-app-57574c8466-qbtg8:test-app-service:172.17.0.17:8080 1/0/12/428/440 200 135 - - --II 2/2/0/1/0 0/0 "GET /test/slowresponse/1 HTTP/1.1"
Aug 9 12:57:28 192.168.122.223 haproxy[32]: 192.168.42.1:48266 [09/Aug/2019:12:57:21.075] public be_edge_http:mynamespace:test-app-service/pod:test-app-57574c8466-qbtg8:test-app-service:172.17.0.17:8080 4334/0/0/3021/7354 200 135 - - --VN 2/2/0/1/0 0/0 "GET /test/slowresponse/3000 HTTP/1.1"
Aug 9 12:57:28 192.168.122.223 haproxy[32]: 192.168.42.1:48266 [09/Aug/2019:12:57:28.430] public be_edge_http:mynamespace:test-app-service/pod:test-app-57574c8466-qbtg8:test-app-service:172.17.0.17:8080 90/0/0/100/189 404 539 - - --VN 2/2/0/1/0 0/0 "GET /favicon.ico HTTP/1.1"
Aug 9 12:57:35 192.168.122.223 haproxy[32]: 192.168.42.1:48268 [09/Aug/2019:12:57:20.648] public public/<NOSRV> -1/-1/-1/-1/15002 408 212 - - cR-- 2/2/0/0/0 0/0 "<BADREQ>"
</pre>
===Testing grok-exporter===
Query the grok-exporter metrics at http://<pod IP>:9144/metrics, either from the haproxy-exporter pod with a localhost call, or from any other pod using the haproxy-exporter pod's IP address. In the example below, I enter the test-app pod. We need to see the '''haproxy_http_request_duration_seconds_bucket''' histogram among the metrics.
<pre>
# kubectl exec -it test-app-57574c8466-qbtg8 /bin/bash -n mynamespace
$ curl http://172.30.213.183:9144/metrics
...
# HELP haproxy_http_request_duration_seconds The request durations of the applications running in openshift that have route defined.
# TYPE haproxy_http_request_duration_seconds histogram
haproxy_http_request_duration_seconds_bucket{haproxy="haproxy[32]",namespace="mynamespace",pod_name="test-app-57574c8466-qbtg8",service="test-app-service",le="0.1"} 0
haproxy_http_request_duration_seconds_bucket{haproxy="haproxy[32]",namespace="mynamespace",pod_name="test-app-57574c8466-qbtg8",service="test-app-service",le="0.2"} 1
haproxy_http_request_duration_seconds_bucket{haproxy="haproxy[32]",namespace="mynamespace",pod_name="test-app-57574c8466-qbtg8",service="test-app-service",le="0.4"} 1
haproxy_http_request_duration_seconds_bucket{haproxy="haproxy[32]",namespace="mynamespace",pod_name="test-app-57574c8466-qbtg8",service="test-app-service",le="1"} 2
haproxy_http_request_duration_seconds_bucket{haproxy="haproxy[32]",namespace="mynamespace",pod_name="test-app-57574c8466-qbtg8",service="test-app-service",le="3"} 2
haproxy_http_request_duration_seconds_bucket{haproxy="haproxy[32]",namespace="mynamespace",pod_name="test-app-57574c8466-qbtg8",service="test-app-service",le="8"} 3
haproxy_http_request_duration_seconds_bucket{haproxy="haproxy[32]",namespace="mynamespace",pod_name="test-app-57574c8466-qbtg8",service="test-app-service",le="20"} 3
haproxy_http_request_duration_seconds_bucket{haproxy="haproxy[32]",namespace="mynamespace",pod_name="test-app-57574c8466-qbtg8",service="test-app-service",le="60"} 3
haproxy_http_request_duration_seconds_bucket{haproxy="haproxy[32]",namespace="mynamespace",pod_name="test-app-57574c8466-qbtg8",service="test-app-service",le="120"} 3
haproxy_http_request_duration_seconds_bucket{haproxy="haproxy[32]",namespace="mynamespace",pod_name="test-app-57574c8466-qbtg8",service="test-app-service",le="+Inf"} 3
haproxy_http_request_duration_seconds_sum{haproxy="haproxy[32]",namespace="mynamespace",pod_name="test-app-57574c8466-qbtg8",service="test-app-service"} 7.9830000000000005
haproxy_http_request_duration_seconds_count{haproxy="haproxy[32]",namespace="mynamespace",pod_name="test-app-57574c8466-qbtg8",service="test-app-service"} 3
</pre>
<br>
==Prometheus settings==
===Static configuration===
<source lang="C++">
    - job_name: grok-exporter
      scrape_interval: 5s
      metrics_path: /metrics
      static_configs:
      - targets: ['grok-exporter-service.default:9144']
</source>
===Pod level data collection===
We want the haproxy-exporter pods to be scalable. This requires that Prometheus does not retrieve the metrics through the service (because then the service would do load balancing) but addresses the pods directly. To do this, Prometheus must query the '''Endpoint''' of the haproxy-exporter through the Kubernetes API, which contains the list of IP addresses of the pods belonging to the service. For this we will use the '''kubernetes_sd_configs''' element of Prometheus. (This requires that Prometheus be able to communicate with the Kubernetes API. For details, see [[Prometheus_on_Kubernetes]])
When using '''kubernetes_sd_configs''', we always retrieve a list of a given type of Kubernetes object from the server (node, service, endpoints, pod) and then look up the resource from which we want to collect the metrics. We do this by writing filter conditions on the labels of the given Kubernetes resources in the '''relabel_configs''' section. In this case, we want to find the Endpoint belonging to the haproxy-exporter, because from that Prometheus can find all the pods of the service. So, based on the labels, we want to find the Endpoint called '''haproxy-exporter-service''' that also has a '''metrics''' port, through which Prometheus can retrieve the metrics. The default URL is '''/metrics''', so it does not have to be defined separately; grok-exporter uses this as well.
<pre>
# kubectl get Endpoints haproxy-exporter-service -n default -o yaml
kind: Endpoints
metadata:
  name: haproxy-exporter-service
...
  ports:
  - name: log-udp
    port: 514
    protocol: UDP
  - name: metrics
    port: 9144
    protocol: TCP
  - name: log-tcp
    port: 514
    protocol: TCP
</pre>
We look for two labels in the Endpoints list:
* __meta_kubernetes_endpoint_port_name: metrics -> 9144
* __meta_kubernetes_service_name: haproxy-exporter-service
<br>
The configMap describing prometheus.yaml must be extended with the following:
<source lang="C++">
    - job_name: haproxy-exporter
      scheme: http
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        server_name: router.default.svc
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      kubernetes_sd_configs:
      - role: endpoints
        namespaces:
          names:
          - default
      relabel_configs:
      - source_labels: [__meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
        action: keep
        regex: haproxy-exporter-service;metrics
</source>
Reload the configMap:
<pre>
# kubectl apply -f cm-prometheus-server-haproxy-full.yaml
</pre>
Then wait for Prometheus to re-read the configuration file:
<pre>
# kubectl logs -f -c prometheus-server prometheus-server-75c9d576c9-gjlcr -n mynamespace
...
level=info ts=2019-07-22T20:25:36.016Z caller=main.go:730 msg="Loading configuration file" filename=/etc/config/prometheus.yml
</pre>
<br>
Then, on the http://mon.192.168.42.185.nip.io/targets screen, verify that Prometheus reaches the haproxy-exporter target:
:[[File:ClipCapIt-190809-164445.PNG]]
<br>
===Scaling the haproxy-exporter===
<pre>
# kubectl scale deployment haproxy-exporter --replicas=2 -n default
deployment.extensions/haproxy-exporter scaled
</pre>
<pre>
# kubectl get deployment haproxy-exporter -n default
NAME               DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
haproxy-exporter   2         2         2            2           3h
</pre>
<br>
:[[File:ClipCapIt-190809-174825.PNG]]
<br>
==Metric types==
===haproxy_http_request_duration_seconds_bucket===
type: histogram
<br>
===haproxy_http_request_duration_seconds_count===
type: counter<br>
The total number of requests that fall into the given histogram.
<pre>
haproxy_http_request_duration_seconds_count{haproxy="haproxy[39]",job="haproxy-exporter",namespace="mynamespace",pod_name="test-app",service="test-app-service"} 5
</pre>
<br>
<br>
===haproxy_http_request_duration_seconds_sum===
type: counter<br>
The sum of the response times in the given histogram. Based on the previous example, there were a total of 5 requests, and the serving times added up to 13.663 s.
<pre>
haproxy_http_request_duration_seconds_sum{haproxy="haproxy[39]",job="haproxy-exporter",namespace="mynamespace",pod_name="test-app",service="test-app-service"} 13.663
</pre>
<br>
=OpenShift router + rsyslog=
Starting with OpenShift 3.11, it is possible to define a router such that OpenShift automatically launches a sidecar rsyslog container in the router pod and configures HAproxy to send its logs through a socket (on an emptyDir volume) to the rsyslog server, which writes them to stdout by default. The rsyslog configuration is stored in a configMap.
:[[File:ClipCapIt-190810-164907.PNG]]
<br>
You can create a router with a syslog server using the '''--extended-logging''' switch of the '''oc adm router''' command.
<pre>
# oc adm router myrouter --extended-logging -n default
info: password for stats user admin has been set to O6S6Ao3wTX
--> Creating router myrouter ...
    configmap "rsyslog-config" created
    warning: serviceaccounts "router" already exists
    clusterrolebinding.authorization.openshift.io "router-myrouter-role" created
    deploymentconfig.apps.openshift.io "myrouter" created
    service "myrouter" created
--> Success
</pre>
<br>
Turn on the debug level in HAproxy:
<pre>
# oc set env dc/myrouter ROUTER_LOG_LEVEL=debug -n default
deploymentconfig.apps.openshift.io/myrouter updated
</pre>
<br>
There are two containers in the new router pod:
<pre>
# kubectl describe pod/myrouter-2-bps5v -n default
..
Containers:
  router:
    Image: openshift/origin-haproxy-router:v3.11.0
    Mounts:
      /var/lib/rsyslog from rsyslog-socket (rw)
...
  syslog:
    Image: openshift/origin-haproxy-router:v3.11.0
    Mounts:
      /etc/rsyslog from rsyslog-config (rw)
      /var/lib/rsyslog from rsyslog-socket (rw)
...
  rsyslog-config:
    Type: ConfigMap (a volume populated by a ConfigMap)
    Name: rsyslog-config
    Optional: false
  rsyslog-socket:
    Type: EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit: <unset>
</pre>
You can see that the '''/var/lib/rsyslog/''' folder is mounted in both containers. HAproxy will create the rsyslog.sock file here, as referenced in its configuration file.
<br>
<br>
===router container===
When we enter the router container, we can see that it has already picked up the configuration:
<pre>
# kubectl exec -it myrouter-2-bps5v /bin/bash -n default -c router
bash-4.2$ cat /var/lib/haproxy/conf/haproxy.config
global
...
  log /var/lib/rsyslog/rsyslog.sock local1 debug
...
defaults
...
  option httplog     --> Enable logging of HTTP request, session state and timers
...
backend be_edge_http:mynamespace:test-app-service
</pre>
<br>
<br>
===rsyslog container===
<pre>
# kubectl exec -it myrouter-2-bps5v /bin/bash -n default -c syslog
$ cat /etc/rsyslog/rsyslog.conf
$ModLoad imuxsock
$SystemLogSocketName /var/lib/rsyslog/rsyslog.sock
$ModLoad omstdout.so
*.* :omstdout:
</pre>
<br>
If we want to reconfigure rsyslog to send the logs e.g. to logstash, we only have to rewrite the configMap. By default, it only writes what it receives to stdout.
<pre>
# kubectl get cm rsyslog-config -n default -o yaml
apiVersion: v1
data:
  rsyslog.conf: |
    $ModLoad imuxsock
    $SystemLogSocketName /var/lib/rsyslog/rsyslog.sock
    $ModLoad omstdout.so
    *.* :omstdout:
kind: ConfigMap
metadata:
  name: rsyslog-config
  namespace: default
</pre>
<br>
<br>
===Viewing the HAproxy logs===
<pre>
# kubectl logs -f myrouter-2-bps5v -c syslog
</pre>