<br>
=== HTTP test application ===
The Kubernetes install files can be found at the root of the git repository.
* http://test-app-service-mynamespace.192.168.42.185.nip.io/test/slowresponse/<delay in milliseconds>
* http://test-app-service-mynamespace.192.168.42.185.nip.io/test/slowresponse/<delay in milliseconds>/<http response code>
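For example, the following call (the delay and response code values are only illustrative) returns an HTTP 503 response after a delay of 1000 ms:
<pre>
# curl -i http://test-app-service-mynamespace.192.168.42.185.nip.io/test/slowresponse/1000/503
</pre>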
=Using HAproxy Metric Endpoint=
HAProxy has a built-in metrics endpoint, which by default provides Prometheus-format metrics (CSV output is still available). Most of the metrics it provides are not really usable. There are two metrics, however, that are definitely worth collecting in Prometheus: one counts the responses with 2xx HTTP codes, the other the responses with 5xx codes, broken down by backend.
The metrics endpoint (/metrics) is turned on by default. It can be turned off, but HAProxy will still collect the metrics in the background. The HAProxy pod is made up of two components: one is HAProxy itself, the other is the router controller that manages the HAProxy configuration. The metrics manager collects metrics from both components every 5 seconds. Both frontend and backend metrics are collected, grouped by service.
:[[File:ClipCapIt-190808-094455.PNG|600px]]
== Query Metrics ==
There are two ways to query metrics.
# Username + password: the /metrics endpoint is queried with basic authentication.
# RBAC rules for the appropriate serviceAccount: for machine processing (e.g. Prometheus), RBAC rules can be enabled for a given serviceAccount so that it may query the metrics.
<br>
=== User + password based query authentication ===
<pre>
http://<user>:<password>@<router_IP>:<STATS_PORT>/metrics
</pre>
The user, the password and the port can be found in the service definition of the HAProxy router. To do this, first find the router service:
<pre>
# kubectl get svc -n default
router ClusterIP 172.30.130.191 <none> 80/TCP,443/TCP,1936/TCP 4d
</pre>
You can see that, in addition to ports 80 and 443, the service listens on an extra port, '''1936''', which is the port of the metrics endpoint.
Now let's look at the service definition to extract the user and the password:
<pre>
# kubectl get svc router -n default -o yaml
</pre>
In the output you can find the metrics user, the password and the port, with which the metrics can be queried:
<pre>
# curl admin:4v9a7ucfMi@192.168.42.64:1936/metrics
</pre>
=== ServiceAccount based query authentication ===
It is possible to query the HAproxy metrics not only with basic authentication, but also with RBAC rules.
You need to create a '''ClusterRole''' that allows queries to the '''routers/metrics''' endpoint. Later this will be bound to the serviceAccount that runs Prometheus.
<br>
'''cr-prometheus-server-route.yaml'''
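A minimal sketch of such a ClusterRole (the object name is an assumption; the '''routers/metrics''' resource lives in the OpenShift route API group):
<source lang="C++">
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: prometheus-server-route
rules:
  - apiGroups:
      - route.openshift.io
    resources:
      - routers/metrics
    verbs:
      - get
</source>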
The second step is to create a '''ClusterRoleBinding''' that binds the serviceAccount belonging to the Prometheus server to the new role.
<br>
'''crb-prometheus-server-route.yaml'''
<source lang="C++">
# Sketch of the binding; the role name and the Prometheus serviceAccount name are
# assumed to match the ClusterRole above and the Prometheus server deployment.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: prometheus-server-route
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: prometheus-server-route
subjects:
  - kind: ServiceAccount
    name: prometheus-server
    namespace: mynamespace
</source>
Let's create the two new objects defined above:
<pre>
# kubectl apply -f cr-prometheus-server-route.yaml
# kubectl apply -f crb-prometheus-server-route.yaml
</pre>
==Prometheus integration==
<pre>
# kubectl get Endpoints router -n default -o yaml
</pre>
<br>
<br>
In the Prometheus configuration, you need to add a new '''target''' that uses '''kubernetes_sd_configs''' to look for the Endpoints named '''router''' and scrapes them on the port named '''1936-tcp'''.
<source lang="c++">
- job_name: 'openshift-router'
</source>
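A sketch of what the full '''openshift-router''' scrape job can look like (the namespace filter, the bearer-token path of the mounted serviceAccount token and the relabeling that keeps only the router endpoints on the '''1936-tcp''' port are assumptions based on the description above):
<source lang="C++">
    - job_name: 'openshift-router'
      scrape_interval: 5s
      metrics_path: /metrics
      kubernetes_sd_configs:
        - role: endpoints
          namespaces:
            names:
              - default
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      relabel_configs:
        # keep only the endpoints of the router service, on the port named 1936-tcp
        - source_labels: [__meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
          action: keep
          regex: router;1936-tcp
</source>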
Update the '''ConfigMap''' of your Prometheus configuration accordingly:
<pre>
# kubectl apply -f cm-prometheus-server-haproxy.yaml
</pre>
Let's look at the logs of the sidecar container running in the Prometheus pod (it is responsible for reloading the configuration). You should see that the configuration has been reloaded.
<pre>
# kubectl logs -c prometheus-server-configmap-reload prometheus-server-75c9d576c9-gjlcr -n mynamespace
</pre>
<pre>
# kubectl logs -c prometheus-server prometheus-server-75c9d576c9-gjlcr -n mynamespace
</pre>
If there were more routers in the cluster, they would all appear here as separate endpoints.
<br>
<br>
<br>
==Metric types==
http://people.redhat.com/jrivera/openshift-docs_preview/openshift-origin/glusterfs-review/architecture/networking/haproxy-router.html <br>
At first glance, there are two meaningful metrics in HAProxy's repertoire. These are:
<br>
=== haproxy_server_http_responses_total ===
<br>
<br>
Let's generate a 200 response with the test application. The counter of the 2xx responses should grow by one: http://test-app-service-mynamespace.192.168.42.185.nip.io/test/slowresponse/1/200
<pre>
haproxy_server_http_responses_total{code="2xx",job="openshift-router",namespace="mynamespace",pod="test-app",route="test-app-service",service="test-app-service"} 1
</pre>
<br>
Now let's generate a 500 response with the test application. This time, the counter of the 5xx responses should grow by one: http://test-app-service-mynamespace.192.168.42.185.nip.io/test/slowresponse/1/500
<pre>
haproxy_server_http_responses_total{code="5xx",job="openshift-router",namespace="mynamespace",pod="test-app",route="test-app-service",service="test-app-service"} 1
</pre>
=== haproxy_server_response_errors_total ===
type: counter
<pre>
haproxy_server_response_errors_total{instance="192.168.122.223:1936",job="openshift-router",namespace="mynamespace",pod="test-app-57574c8466-pvcsg",route="test-app-service",server="172.17.0.17:8080",service="test-app-service"}
</pre>
<br>
=Collecting metrics from logs=
==Overview==
The task is to process the HAProxy access log with a log parser and to generate Prometheus metrics from it that are made available to Prometheus through an HTTP endpoint. We will use the grok-exporter tool, which can do both: it can read logs from a file or from stdin and generate metrics based on them. The grok-exporter will receive the logs from HAProxy through a bundled rsyslog server. Rsyslog writes the logs into a file, from which grok-exporter reads them and converts them into Prometheus metrics.
Necessary steps:
* We need to build a docker image from grok-exporter that also contains rsyslog. (The container must be able to run the rsyslog server as root, which requires extra OpenShift configuration.)
* The grok-exporter image will run on OpenShift, with the grok-exporter configuration in a ConfigMap and the rsyslog working directory on an OpenShift volume.
* For the grok-exporter deployment we have to create a ClusterIP-type service that load-balances between the grok-exporter pods.
* The routers (HAProxy) have to be configured to write access logs in debug mode and to send them to the remote rsyslog server listening on port 514 of the grok-exporter service.
* The rsyslog server running in the grok-exporter pod writes the received HAProxy access logs into the file '''/var/log/messages''' (an emptyDir-type volume) and also sends them to '''stdout'''.
* Logs written to stdout are also picked up by the docker log driver and forwarded to the centralized log architecture (log retention).
* The grok-exporter program reads '''/var/log/messages''' and generates Prometheus metrics from the HAProxy access logs.
* Prometheus has to be configured to use '''kubernetes_sd_configs''' and collect the metrics directly from the grok-exporter pods, bypassing the service (and its load balancing), since every pod has to be scraped.
<br>
Detailed documentation at: <br>
https://github.com/fstab/grok_exporter/blob/master/CONFIG.md<br>
<br>
The grok-exporter can read from three types of input sources:
* '''file''': we will stick to this; it will process the log written by rsyslog.
* '''webhook''': this solution could also be used with logstash as the remote syslog server; logstash can then send the logs to the grok-exporter webhook with its "http-output" plugin.
* '''stdin''': with rsyslog, stdin can also be used. This requires the '''omprog''' module, which reads from the rsyslog socket and passes the messages on to a program through stdin, restarting the program if it is no longer running: https://www.rsyslog.com/doc/v8-stable/configuration/modules/omprog.html
=== Alternative Solutions ===
'''Fluentd''' <br>
* fluent-plugin-rewrite-tag-filter
* fluent-plugin-prometheus
'''mtail''':<br>
The other alternative would be Google's '''mtail''' project, which is supposed to be more resource-efficient at processing logs than the grok engine.<br>
https://github.com/google/mtail
* global: general settings (e.g. the config version).
* input: Tells you where and how to retrieve logs. Can be stdin, file and webhook. We will use the file input.
* grok: the location of the grok patterns. In the Docker image the pattern definitions are stored in the /grok/patterns folder.
* metrics: this is the most important part. Here you define the metrics and the regular expressions (in the form of grok patterns) associated with them.
* server: the port on which the built-in web server should listen.
<br>
====Metrics====
Metrics must be defined per metric type. The four basic Prometheus metric types are supported: '''Gauge, Counter, Histogram, Summary''' (quantile).
Below the type you must specify:
* name: This will be the name of the metric
* help: This will be the help text for the metric.
* match: describes, as a regular expression, the structure of the log lines the metric should match. Here you can use predefined grok patterns:
** '''BASIC grok patterns''': https://github.com/logstash-plugins/logstash-patterns-core/blob/master/patterns/grok-patterns
** '''HAPROXY patterns''': https://github.com/logstash-plugins/logstash-patterns-core/blob/master/patterns/haproxy
* label: You can name the result groups. The name can be referenced in the label section, which will create a label whose value will be the parsed data.
<br>
==== match ====
In match, you have to compose a regular expression from grok building blocks. It is assumed that the elements of the log line are separated by spaces. Each building block has the form '''%{PATTERN_NAME}''', where PATTERN_NAME must exist in one of the pattern collections. The most common type is '''%{DATA}''', which refers to an arbitrary piece of data that contains no whitespace. There are also patterns that are combined from multiple elementary patterns. If you want the regular expression described by a pattern to become a capture group, you have to name the pattern, for example:
<pre>
%{DATA:this_is_the_name}
</pre>
The value of the field found by the pattern will then be included in the variable '''this_is_the_name''', which can be referenced when defining the value of the metric or when producing a label.
<br>
==== labels ====
You can refer to the named patterns in the labels section. This assigns the value of the field parsed from the given log line to the defined label. For example, with the '''%{DATA:this_is_the_name}''' pattern you could write the following label: <br>
<pre>
mylabel: '{{.this_is_the_name}}'
</pre>
Then, if the field described by the %{DATA} pattern had the value 'myvalue', the metric would get the following label: '''{mylabel="myvalue"}''' <br>
Let's look at an example: <br>
The following log line is given:
<pre>
30.07.2016 14:37:03 adam 1.5
</pre>
A matching metric definition (a sketch along the lines of the counter example in the grok-exporter documentation) could be:
<source lang="C++">
metrics:
  - type: counter
    name: grok_example_lines_total
    help: Example counter metric with labels.
    match: '%{DATE} %{TIME} %{USER:user} %{NUMBER}'
    labels:
      user: '{{.user}}'
</source>
The metric will be named '''grok_example_lines_total'''. The resulting metric will be:
<pre>
# HELP grok_example_lines_total Example counter metric with labels.
# TYPE grok_example_lines_total counter
grok_example_lines_total{user="adam"} 1
</pre>
<br>
==== Determine the value of a metric ====
For a counter-type metric, you do not need to determine the value of the metric, because it simply counts the matching log lines. For all other types, however, you have to specify what counts as the value. This is done in the '''value''' section, where a named grok pattern from the match section must be referenced with a Go template, in the same way as for the labels. E.g. the following two log lines are given:
<pre>
30.07.2016 14:37:03 adam 1
30.07.2016 14:37:03 adam 5
</pre>
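A sketch of a metric definition that uses these values (the metric name and the summary type are assumptions; '''value''' references the named pattern '''val'''):
<source lang="C++">
metrics:
  - type: summary
    name: grok_example_values
    help: Example summary metric.
    match: '%{DATE} %{TIME} %{USER:user} %{NUMBER:val}'
    value: '{{.val}}'
    labels:
      user: '{{.user}}'
</source>
For the two log lines above this produces a summary with a count of 2 and a sum of 6.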
<br>
==== Functions ====
You can apply functions both to the metric value and to the labels. This requires grok-exporter version '''0.2.7''' or later. String manipulation functions as well as two-argument arithmetic functions can be used. The supported arithmetic functions are:
* add
* subtract
* multiply
* divide
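As an illustration (a sketch only; the metric name, the help text and the buckets are assumptions), the example above can be rewritten as a histogram whose value is the '''val''' field scaled with the '''multiply''' function:
<source lang="C++">
metrics:
  - type: histogram
    name: grok_example_lines
    help: Example counter metric with labels.
    match: '%{DATE} %{TIME} %{USER:user} %{NUMBER:val}'
    value: '{{multiply .val 1000}}'
    buckets: [1, 2, 3]
    labels:
      user: '{{.user}}'
</source>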
Then the metric changes to:
<pre>
# HELP grok_example_lines Example counter metric with labels.
# TYPE grok_example_lines histogram
...
</pre>
<br>
Example haproxy access-log:
<pre>
Aug 6 20:53:30 192.168.122.223 haproxy[39]: 192.168.42.1:50708 [06/Aug/2019:20:53:30.267] public be_edge_http:mynamespace:test-app-service/pod:test-app-57574c8466-qbtg8:test-app-service:172.17.0.12:8080 1/0/0/321/321 200 135 - - --NI 2/2/0/1/0 0/0 "GET /test/slowresponse/1 HTTP/1.1"
</pre>
In the config.yml file we will define a histogram that captures the response time of complete requests. This is a classic histogram, usually with the following buckets (in seconds):
<pre>
[0.1, 0.2, 0.4, 1, 3, 8, 20, 60, 120]
</pre>
Response time metrics are by convention called '''<prefix>_http_request_duration_seconds'''.
'''config.yml'''
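A minimal sketch of what the config.yml can look like. The match expression, the label names and the use of the '''divide''' function to convert the HAProxy total time (Tt, in milliseconds) to seconds are assumptions based on the access-log format and the metric output shown in this howto; the configuration in the git repository may differ in details:
<source lang="C++">
global:
  config_version: 2
input:
  type: file
  path: /var/log/messages
  readall: true        # set this to false in a live environment (see the warning below)
grok:
  patterns_dir: /grok/patterns
metrics:
  - type: histogram
    name: haproxy_http_request_duration_seconds
    help: The request durations of the applications running in openshift that have route defined.
    # Sketch of a match for the OpenShift HAProxy access-log format shown above
    match: '%{SYSLOGTIMESTAMP} %{IP} %{SYSLOGPROG:haproxy}: %{IP}:%{INT} \[%{HAPROXYDATE}\] %{DATA} %{DATA}:%{DATA:namespace}:%{DATA:service}/pod:%{DATA:pod_name}:%{NOTSPACE} %{INT}/%{INT}/%{INT}/%{INT}/%{INT:Tt} %{INT:status}'
    # Tt is the total request time in milliseconds, converted here to seconds
    value: '{{divide .Tt 1000}}'
    buckets: [0.1, 0.2, 0.4, 1, 3, 8, 20, 60, 120]
    labels:
      haproxy: '{{.haproxy}}'
      namespace: '{{.namespace}}'
      service: '{{.service}}'
      pod_name: '{{.pod_name}}'
server:
  port: 9144
</source>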
* '''port: 9144''' -> This port will provide the /metrics endpoint.
<br>
{{warning|Do not forget to set '''readall''' to '''false''' in a live environment, otherwise efficiency is greatly reduced}}
<br>
<br>
<br>
The second problem is that they are based on an Ubuntu base image, on which it is very difficult to get rsyslog to log to stdout. That is needed so that the Kubernetes centralized log collector can also receive the HAProxy logs, serving both monitoring and centralized logging. Therefore the original Dockerfile will be ported to '''centos 7''' and extended with the installation of the rsyslog server.
<br>
All necessary files are available on git-hub: https://github.com/berkiadam/haproxy-metrics/tree/master/grok-exporter-centos <br>
I also created an Ubuntu-based solution, which is an extension of the original docker-hub solution; it can also be found on GitHub, in the '''grok-exporter-ubuntu''' folder. For the rest of the howto we will always use the centos version.
<br>
<br>
=== Dockerfile ===
We will start from the '''palobo/grok_exporter''' Dockerfile, complement it with the rsyslog installation and port it to centos: https://github.com/berkiadam/haproxy-metrics/tree/master/grok-exporter-centos
<br>
➲[[File:Grok-exporter-docker-build.zip|Download all files required for Docker image build]]
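The exact Dockerfile is in the zip above and in the git repository; the following is only a rough sketch of its structure (the grok_exporter version and download URL, and the helper files rsyslog.conf and entrypoint.sh, are assumptions):
<source lang="C++">
FROM centos:7

# rsyslog receives the HAProxy logs on port 514 and writes them to /var/log/messages
RUN yum install -y rsyslog unzip && yum clean all

# grok_exporter release binary (version and URL are assumptions, adjust as needed)
ADD https://github.com/fstab/grok_exporter/releases/download/v0.2.8/grok_exporter-0.2.8.linux-amd64.zip /tmp/
RUN unzip /tmp/grok_exporter-0.2.8.linux-amd64.zip -d /tmp \
 && mkdir -p /grok \
 && cp -r /tmp/grok_exporter-0.2.8.linux-amd64/* /grok/ \
 && rm -rf /tmp/grok_exporter-0.2.8.linux-amd64*

# rsyslog configuration and a small start script (hypothetical file names)
COPY rsyslog.conf /etc/rsyslog.conf
COPY entrypoint.sh /entrypoint.sh

EXPOSE 9144 514

# entrypoint.sh starts rsyslogd in the background, then grok_exporter in the foreground
ENTRYPOINT ["/entrypoint.sh"]
</source>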
=== Local build and local test ===
First we build the docker image with the local docker daemon so that we can run it locally for testing. Later we will build it on the minishift VM as well, since we can only upload it to the minishift docker registry from there. Because we will upload the image to a remote (not local) docker repository, it is important to follow the naming convention:
<pre>
<repo URL>:<repo port>/<namespace>/<image-name>:<tag>
</pre>
We will upload the image to the docker registry running in minishift, so it is important to specify the address and port of the minishift docker registry and the OpenShift namespace where the image will be placed.
<pre>
# docker build -t 172.30.1.1:5000/default/grok_exporter:1.1.0 .
</pre>
The resulting image can be tested with the native, local docker. Create a HAProxy test log file ('''haproxy.log''') with the following content; grok-exporter will process it as if it had come from HAProxy.
<pre>
Aug 6 20:53:30 192.168.122.223 haproxy[39]: 192.168.42.1:50708 [06/Aug/2019:20:53:30.267] public be_edge_http:mynamespace:test-app-service/pod:test-app-57574c8466-qbtg8:test-app-service:172.17.0.12:8080 1/0/0/321/321 200 135 - - --NI 2/2/0/1/0 0/0 "GET /test/slowresponse/1 HTTP/1.1"
</pre>
<br>
Put the '''config.yml''' grok configuration created above into the same folder. In config.yml, change the input.path to '''/grok/haproxy.log''' so that grok-exporter processes our test log file. Then start it with a '''docker run''' command:
<pre>
# docker run -d -p 9144:9144 -p 514:514 -v $(pwd)/config.yml:/etc/grok_exporter/config.yml -v $(pwd)/haproxy.log:/grok/haproxy.log --name grok 172.30.1.1:5000/default/grok_exporter:1.1.0
</pre>
<br>
After starting it, check in the logs that both grok-exporter and rsyslog have actually started:
<pre>
# docker logs grok
 * Starting enhanced syslogd rsyslogd
</pre>
<br>
<br>
As a second step, verify that the '''rsyslog''' server running in the docker container can receive remote log messages. To do this, first enter the container and look for the /var/log/messages file:
<pre>
# docker exec -it grok /bin/bash
</pre>
<br>
<pre>
# logger -n localhost -P 514 -T "this is the message"
</pre>
(The -T option sends the message over TCP.) The message should then show up in /var/log/messages inside the container:
<pre>
Aug 8 16:54:25 dell adam this is the message
</pre>
<br>
<br>
===Remote build===
We would like to upload the completed docker image to minishift's own registry. To do this, the image has to be built with the local docker daemon of the minishift VM, because the minishift registry can only be accessed from there. <br>
Details: [[Openshift_basics#Minishfit_docker_registry|➲Image push to the minishift docker registry]]
<pre>
# oc login -u system:admin
# oc adm policy add-cluster-role-to-user cluster-admin admin --as=system:admin
cluster role "cluster-admin" added: "admin"
</pre>
{{note|If you get the error '''Error from server (NotFound): the server could not find the requested resource''', it means that the '''oc''' client program is older than the OpenShift version}}
<pre>
# minishift docker-env
# eval $(minishift docker-env)
# oc login
Username: admin
Password: <admin>
# docker login -u admin -p $(oc whoami -t) $(minishift openshift registry)
Login Succeeded
</pre>
Build it on the minishift VM as well:
<pre>
# docker build -t 172.30.1.1:5000/default/grok_exporter:1.1.0 .
</pre>
Then push the image into the minishift registry:
<pre>
# docker push 172.30.1.1:5000/default/grok_exporter:1.1.0
</pre>
<br>
==Kubernetes objects==
* haproxy-exporter service account
* scc-anyuid.yaml
<br>
<br>
===Create ServiceAccount===
The haproxy-exporter needs its own serviceAccount, for which we will allow running privileged (root) containers. This is required by the rsyslog server.
<pre>
# kubectl create serviceaccount haproxy-exporter -n default
serviceaccount/haproxy-exporter created
</pre>
The resulting serviceAccount (a sketch of the yaml equivalent):
<source lang="C++">
apiVersion: v1
kind: ServiceAccount
metadata:
  name: haproxy-exporter
  namespace: default
</source>
<br>
<br><br>
<pre>
# kubectl get SecurityContextConstraints
</pre>
<br>
'''scc-anyuid.yaml'''
<source lang="C++">
kind: SecurityContextConstraints
metadata:
...
users:
- system:serviceaccount:default:haproxy-exporter
...
</source>
<pre>
# oc edit scc anyuid
</pre>
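Alternatively (assuming the built-in '''anyuid''' SCC is used, as above), the serviceAccount can be added to the SCC with a single command instead of editing it by hand:
<pre>
# oc adm policy add-scc-to-user anyuid -z haproxy-exporter -n default
</pre>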
<br>
===Create the objects===
<pre>
# kubectl apply -f cm-haproxy-exporter.yaml
configmap/haproxy-exporter created
</pre>
<pre>
# kubectl apply -f deployment-haproxy-exporter.yaml
deployment.apps/haproxy-exporter created
# kubectl rollout status deployment haproxy-exporter -n default
deployment "haproxy-exporter" successfully rolled out
</pre>
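The deployment is exposed through the '''haproxy-exporter-service''' used in the next sections. A minimal sketch of such a ClusterIP service (the selector label is an assumption; the ports match the service listing shown later):
<source lang="C++">
apiVersion: v1
kind: Service
metadata:
  name: haproxy-exporter-service
  namespace: default
spec:
  type: ClusterIP
  selector:
    app: haproxy-exporter      # assumed pod label set by the deployment
  ports:
    - name: metrics
      port: 9144
      targetPort: 9144
      protocol: TCP
    - name: syslog-tcp
      port: 514
      targetPort: 514
      protocol: TCP
    - name: syslog-udp
      port: 514
      targetPort: 514
      protocol: UDP
</source>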
<br>
===Testing===
<pre>
# kubectl logs haproxy-exporter-744d84f5df-9fj9m -n default
</pre>
<pre>
# kubectl exec -it haproxy-exporter-647d7dfcdf-gbgrg /bin/bash -n default
</pre>
<pre>
logger -n localhost -P 514 -T "this is the message"
</pre>
<pre>
# cat messages
Aug 28 19:16:09 localhost root: this is the message
</pre>
<pre>
# kubectl logs haproxy-exporter-647d7dfcdf-gbgrg -n default
Starting server on http://haproxy-exporter-647d7dfcdf-gbgrg:9144/metrics
2019-08-28T19:16:09+00:00 localhost root: this is the message
</pre>
<br>
==HAproxy configuration==
===Setting environment variables===
For HAproxy we will set the address of the rsyslog server running in the haproxy-exporter pod via an environment variable. As a first step, list the haproxy-exporter service:
<pre>
# kubectl get svc -n default
NAME                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                    AGE
haproxy-exporter-service    ClusterIP   172.30.213.183   <none>        9144/TCP,514/TCP,514/UDP   15s
..
</pre>
<pre>
# oc set env dc/myrouter ROUTER_SYSLOG_ADDRESS=172.30.213.183 -n default
deploymentconfig.apps.openshift.io/myrouter updated
</pre>
{{note|In minishift, name resolution of service names does not work inside the router container, because it is not the Kubernetes cluster's DNS server that is configured but the minishift VM's. Therefore we have no choice but to use the service's IP address instead of its name. In an OpenShift environment we would use the name of the service}}
<pre>
# oc set env dc/myrouter ROUTER_LOG_LEVEL=debug -n default
deploymentconfig.apps.openshift.io/myrouter updated
</pre>
{{warning|A performance test should be run to see how much extra load it puts on HAproxy when it runs in debug mode}}
<br>
<pre>
# kubectl exec -it myrouter-5-hf5cs /bin/bash -n default
$ cat /var/lib/haproxy/conf/haproxy.config
global
..
</pre>
<br>
<br>
<br>
===Testing the rsyslog server===
Generate some traffic through HAproxy, then go back into the haproxy-exporter container and list the contents of the messages file.
<pre>
# kubectl exec -it haproxy-exporter-744d84f5df-9fj9m /bin/bash -n default
#
# tail -f /var/log/messages
Aug 9 12:52:17 192.168.122.223 haproxy[24]: Proxy fe_sni stopped (FE: 0 conns, BE: 0 conns).
Aug 9 12:52:17 192.168.122.223 haproxy[24]: Proxy be_no_sni stopped (FE: 0 conns, BE: 0 conns).
Aug 9 12:52:17 192.168.122.223 haproxy[24]: Proxy fe_no_sni stopped (FE: 0 conns, BE: 0 conns).
Aug 9 12:52:17 192.168.122.223 haproxy[24]: Proxy openshift_default stopped (FE: 0 conns, BE: 1 conns).
Aug 9 12:52:17 192.168.122.223 haproxy[24]: Proxy be_edge_http:dsp:nginx-route stopped (FE: 0 conns, BE: 0 conns).
Aug 9 12:52:17 192.168.122.223 haproxy[24]: Proxy be_http:mynamespace:prometheus-alertmanager-jv69s stopped (FE: 0 conns, BE: 0 conns).
Aug 9 12:52:17 192.168.122.223 haproxy[24]: Proxy be_http:mynamespace:prometheus-server-2z6zc stopped (FE: 0 conns, BE: 0 conns).
Aug 9 12:52:17 192.168.122.223 haproxy[24]: Proxy be_edge_http:mynamespace:test-app-service stopped (FE: 0 conns, BE: 0 conns).
Aug 9 12:52:17 192.168.122.223 haproxy[24]: Proxy be_edge_http:myproject:nginx-route stopped (FE: 0 conns, BE: 0 conns).
Aug 9 12:52:17 192.168.122.223 haproxy[32]: 127.0.0.1:43720 [09/Aug/2019:12:52:17.361] public openshift_default/<NOSRV> 1/-1/-1/-1/0 503 3278 - - SC-- 1/1/0/0/0 0/0 "HEAD / HTTP/1.1"
</pre>
<pre>
...
Aug 9 12:57:21 192.168.122.223 haproxy[32]: 192.168.42.1:48266 [09/Aug/2019:12:57:20.636] public be_edge_http:mynamespace:test-app-service/pod:test-app-57574c8466-qbtg8:test-app-service:172.17.0.17:8080 1/0/12/428/440 200 135 - - --II 2/2/0/1/0 0/0 "GET /test/slowresponse/1 HTTP/1.1"
Aug 9 12:57:28 192.168.122.223 haproxy[32]: 192.168.42.1:48266 [09/Aug/2019:12:57:21.075] public be_edge_http:mynamespace:test-app-service/pod:test-app-57574c8466-qbtg8:test-app-service:172.17.0.17:8080 4334/0/0/3021/7354 200 135 - - --VN 2/2/0/1/0 0/0 "GET /test/slowresponse/3000 HTTP/1.1"
Aug 9 12:57:28 192.168.122.223 haproxy[32]: 192.168.42.1:48266 [09/Aug/2019:12:57:28.430] public be_edge_http:mynamespace:test-app-service/pod:test-app-57574c8466-qbtg8:test-app-service:172.17.0.17:8080 90/0/0/100/189 404 539 - - --VN 2/2/0/1/0 0/0 "GET /favicon.ico HTTP/1.1"
Aug 9 12:57:35 192.168.122.223 haproxy[32]: 192.168.42.1:48268 [09/Aug/2019:12:57:20.648] public public/<NOSRV> -1/-1/-1/-1/15002 408 212 - - cR-- 2/2/0/0/0 0/0 "<BADREQ>"
</pre>
===Testing grok-exporter===
Query the grok-exporter metrics at http://<pod IP>:9144/metrics, either from the haproxy-exporter pod with a localhost call or from any other pod using the haproxy-exporter pod's IP address. In the example below I enter the test-app pod. We need to see the '''haproxy_http_request_duration_seconds_bucket''' histogram among the metrics.
<pre>
# kubectl exec -it test-app-57574c8466-qbtg8 /bin/bash -n mynamespace
$ curl http://172.30.213.183:9144/metrics
...
# HELP haproxy_http_request_duration_seconds The request durations of the applications running in openshift that have route defined.
# TYPE haproxy_http_request_duration_seconds histogram
haproxy_http_request_duration_seconds_bucket{haproxy="haproxy[32]",namespace="mynamespace",pod_name="test-app-57574c8466-qbtg8",service="test-app-service",le="0.1"} 0
haproxy_http_request_duration_seconds_bucket{haproxy="haproxy[32]",namespace="mynamespace",pod_name="test-app-57574c8466-qbtg8",service="test-app-service",le="0.2"} 1
haproxy_http_request_duration_seconds_bucket{haproxy="haproxy[32]",namespace="mynamespace",pod_name="test-app-57574c8466-qbtg8",service="test-app-service",le="0.4"} 1
haproxy_http_request_duration_seconds_bucket{haproxy="haproxy[32]",namespace="mynamespace",pod_name="test-app-57574c8466-qbtg8",service="test-app-service",le="1"} 2
haproxy_http_request_duration_seconds_bucket{haproxy="haproxy[32]",namespace="mynamespace",pod_name="test-app-57574c8466-qbtg8",service="test-app-service",le="3"} 2
haproxy_http_request_duration_seconds_bucket{haproxy="haproxy[32]",namespace="mynamespace",pod_name="test-app-57574c8466-qbtg8",service="test-app-service",le="8"} 3
haproxy_http_request_duration_seconds_bucket{haproxy="haproxy[32]",namespace="mynamespace",pod_name="test-app-57574c8466-qbtg8",service="test-app-service",le="20"} 3
haproxy_http_request_duration_seconds_bucket{haproxy="haproxy[32]",namespace="mynamespace",pod_name="test-app-57574c8466-qbtg8",service="test-app-service",le="60"} 3
haproxy_http_request_duration_seconds_bucket{haproxy="haproxy[32]",namespace="mynamespace",pod_name="test-app-57574c8466-qbtg8",service="test-app-service",le="120"} 3
haproxy_http_request_duration_seconds_bucket{haproxy="haproxy[32]",namespace="mynamespace",pod_name="test-app-57574c8466-qbtg8",service="test-app-service",le="+Inf"} 3
haproxy_http_request_duration_seconds_sum{haproxy="haproxy[32]",namespace="mynamespace",pod_name="test-app-57574c8466-qbtg8",service="test-app-service"} 7.9830000000000005
haproxy_http_request_duration_seconds_count{haproxy="haproxy[32]",namespace="mynamespace",pod_name="test-app-57574c8466-qbtg8",service="test-app-service"} 3
</pre>
<br>
==Prometheus settings==
===Static configuration===
<source lang="C++">
    - job_name: grok-exporter
      scrape_interval: 5s
      metrics_path: /metrics
      static_configs:
        - targets: ['grok-exporter-service.default:9144']
</source>
===Pod-level data collection===
<pre>
# kubectl get Endpoints haproxy-exporter-service -n default -o yaml
kind: Endpoints
metadata:
...
</pre>
* __meta_kubernetes_endpoint_port_name: metrics -> 9144
* __meta_kubernetes_service_name: haproxy-exporter-service
<br>
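Based on the two meta labels above, the Prometheus scrape job can keep only the haproxy-exporter pods and bypass the service's load balancing. A minimal sketch of such a job (the job name and the namespace filter are assumptions):
<source lang="C++">
    - job_name: haproxy-exporter
      scrape_interval: 5s
      metrics_path: /metrics
      kubernetes_sd_configs:
        - role: endpoints
          namespaces:
            names:
              - default
      relabel_configs:
        # keep only the endpoints of the haproxy-exporter service, on the port named "metrics"
        - source_labels: [__meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
          action: keep
          regex: haproxy-exporter-service;metrics
</source>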
<pre>
# kubectl apply -f cm-prometheus-server-haproxy-full.yaml
</pre>
<pre>
# kubectl logs -f -c prometheus-server prometheus-server-75c9d576c9-gjlcr -n mynamespace
...
level=info ts=2019-07-22T20:25:36.016Z caller=main.go:730 msg="Loading configuration file" filename=/etc/config/prometheus.yml
</pre>
<br>
<br>
===Scaling the haproxy-exporter===
<pre>
# kubectl scale deployment haproxy-exporter --replicas=2 -n default
deployment.extensions/haproxy-exporter scaled
</pre>
<pre>
# kubectl get deployment haproxy-exporter -n default
NAME               DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
haproxy-exporter   2         2         2            2           3h
</pre>
<br>
<br>
==Metric types==
===haproxy_http_request_duration_seconds_bucket===
type: histogram
<br>
===haproxy_http_request_duration_seconds_count===
type: counter<br>
The total number of requests that fall into the given histogram.
<pre>
haproxy_http_request_duration_seconds_count{haproxy="haproxy[39]",job="haproxy-exporter",namespace="mynamespace",pod_name="test-app",service="test-app-service"} 5
</pre>
<br>
<br>
===haproxy_http_request_duration_seconds_sum===
type: counter<br>
The sum of the response times in the given histogram. Based on the previous example, there were a total of 5 requests and their serving times added up to 13.663 seconds.
<pre>
haproxy_http_request_duration_seconds_sum{haproxy="haproxy[39]",job="haproxy-exporter",namespace="mynamespace",pod_name="test-app",service="test-app-service"} 13.663
</pre>
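From these series the usual latency statistics can be derived in Prometheus. For example, a sketch of a 95th-percentile query over the histogram (assuming the label set shown above):
<pre>
histogram_quantile(0.95, sum(rate(haproxy_http_request_duration_seconds_bucket[5m])) by (le, service))
</pre>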
<br>
=OpenShift router + rsyslog=
Starting with OpenShift 3.11, it is possible to define a router for which OpenShift automatically launches a sidecar rsyslog container in the router pod and configures HAproxy to send its logs to the rsyslog server through a socket (on an emptyDir volume); rsyslog then writes them to stdout by default. The rsyslog configuration is stored in a configMap.
<br>
<pre>
# oc adm router myrouter --extended-logging -n default
info: password for stats user admin has been set to O6S6Ao3wTX
</pre>
<br>
<pre>
# oc set env dc/myrouter ROUTER_LOG_LEVEL=debug -n default
deploymentconfig.apps.openshift.io/myrouter updated
</pre>
<br>
<pre>
# kubectl describe pod/myrouter-2-bps5v -n default
..
Containers:
...
...
</pre>
<br>
<br>
===router container===
When we enter the router container, we can see that it has already picked up the configuration:
<pre>
# kubectl exec -it myrouter-2-bps5v /bin/bash -n default -c router
bash-4.2$ cat /var/lib/haproxy/conf/haproxy.config
global
...
...
defaults
...
...
backend be_edge_http:mynamespace:test-app-service
</pre>
<br>
<br>
===rsyslog container===
<pre>
# kubectl exec -it myrouter-2-bps5v /bin/bash -n default -c syslog
$ cat /etc/rsyslog/rsyslog.conf
$ModLoad imuxsock
$SystemLogSocketName /var/lib/rsyslog/rsyslog.sock
$ModLoad omstdout.so
*.* :omstdout:
</pre>
<br>
<pre>
# kubectl get cm rsyslog-config -n default -o yaml
apiVersion: v1
data:
  rsyslog.conf: |
    $ModLoad imuxsock
    $SystemLogSocketName /var/lib/rsyslog/rsyslog.sock
    $ModLoad omstdout.so
    *.* :omstdout:
kind: ConfigMap
metadata:
  name: rsyslog-config
  namespace: default
</pre>
<br>
<br>
===Viewing the HAproxy logs===
<pre>
# kubectl logs -f myrouter-2-bps5v -c syslog
</pre>