:[[File:ClipCapIt-190807-102633.PNG]]
<br>
=== HTTP test application ===
The Kubernetes install files can be found at the root of the git repository.
* http://test-app-service-mynamespace.192.168.42.185.nip.io/test/slowresponse/<delay in milliseconds>
* http://test-app-service-mynamespace.192.168.42.185.nip.io/test/slowresponse/<delay in milliseconds>/<http response code>
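For example, a response that is delayed by 1 second and returns HTTP 500 can be requested like this (assuming the route above is reachable from your machine):
<pre>
# curl -i http://test-app-service-mynamespace.192.168.42.185.nip.io/test/slowresponse/1000/500
</pre>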
=Using HAproxy Metric Endpoint=
HAproxy has a built-in metric endpoint, which by default provides metrics in the Prometheus standard format (CSV output is still available). However, most of the metrics it exposes are not really usable. There are two metric types worth mentioning. One of them counts the responses with 2xx status codes, broken down per backend, and the other counts the responses with 5xx status codes.
The metric endpoint (/metrics) is turned on by default. It can be turned off, but HAproxy will still collect the metrics in the background. The HAproxy pod is made up of two components: one is HAproxy itself, the other is the router controller that manages the HAproxy configuration. Metrics are collected from both components every 5 seconds by the metric manager. Both frontend and backend metrics are collected, grouped by service.
:[[File:ClipCapIt-190808-094455.PNG|600px]]
== Query Metrics ==
There are two ways to query metrics.
# Basic authentication with username + password: the /metrics HTTP endpoint is queried with basic authentication.
# Authentication with Kubernetes RBAC rules for the appropriate serviceAccount: for machine processing (e.g. in Prometheus) it is possible to enable RBAC rule based authentication so that a given serviceAccount can query the metrics.
<br>
=== User + password based query authentication ===
The user, the password, and the metrics port can be found in the service definition of the HAproxy router. First, find the router service:
<pre>
# kubectl get svc -n default
router ClusterIP 172.30.130.191 <none> 80/TCP,443/TCP,1936/TCP 4d
</pre>
You can see that besides the default ports 80 and 443 it is also listening on port '''1936''', which is the port of the metrics endpoint.
Now, let's examine the service definition to extract the username and the password:
<source lang="C++">
# kubectl get svc router -n default -o yaml
<pre>
# curl admin:4v9a7ucfMi@192.168.42.64:1936/metrics
# HELP apiserver_audit_event_total Counter of audit events generated and sent to the audit backend.
=== ServiceAccount based query authentication ===
It is possible to query the HAproxy metrics not only with basic authentication, but also with RBAC rules.
<br>
The first step is to create a '''ClusterRole''' that allows querying the router metrics.
<br>
'''cr-prometheus-server-route.yaml'''
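The content of the file is not shown here. As an illustration only, a minimal sketch of such a ClusterRole could look like the following; the '''routers/metrics''' resource and the '''route.openshift.io''' API group are assumptions about how OpenShift exposes router metrics, the authoritative version is the file in the git repository:
<source lang="C++">
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: prometheus-server-route
rules:
  - apiGroups:
      - route.openshift.io
    resources:
      - routers/metrics
    verbs:
      - get
</source>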
The second step is to create a '''ClusterRoleBinding''' that binds the serviceAccount used by Prometheus to the newly created role.
<br>
'''crb-prometheus-server-route.yaml'''
<source lang="C++">
...
  namespace: mynamespace
</source>
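The file above is only shown partially. A minimal sketch of such a ClusterRoleBinding is given below; the serviceAccount name '''prometheus-server''' is an assumption, use the serviceAccount your Prometheus instance actually runs with:
<source lang="C++">
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: prometheus-server-route
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: prometheus-server-route
subjects:
  - kind: ServiceAccount
    name: prometheus-server
    namespace: mynamespace
</source>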
Let's create the two new objects defined above:
<pre>
# kubectl apply -f cr-prometheus-server-route.yaml
# kubectl apply -f crb-prometheus-server-route.yaml
</pre>
==Prometheus integration==
<pre>
# kubectl get Endpoints router -n default -o yaml
...
</pre>
<br>
<br>
In the Prometheus configuration, you need to add a new '''target''' that uses '''kubernetes_sd_configs''' and looks for the Endpoint named '''router''' with the port named '''1936-tcp'''.
<source lang="c++">
- job_name: 'openshift-router'
</source>
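The job definition above is truncated. A minimal sketch of such a scrape job is shown below; the endpoint discovery in the '''default''' namespace and the serviceAccount token authentication are assumptions, the relabeling simply keeps the Endpoint named '''router''' and the port named '''1936-tcp''':
<source lang="C++">
    - job_name: 'openshift-router'
      scheme: http
      kubernetes_sd_configs:
        - role: endpoints
          namespaces:
            names:
              - default
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      relabel_configs:
        - source_labels: [__meta_kubernetes_service_name]
          action: keep
          regex: router
        - source_labels: [__meta_kubernetes_endpoint_port_name]
          action: keep
          regex: 1936-tcp
</source>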
Update the '''ConfigMap''' holding the Prometheus configuration:
<pre>
# kubectl apply -f cm-prometheus-server-haproxy.yaml
</pre>
Let's look into the logs of the sidecar container running in the Prometheus pod (it is responsible for reloading the configuration). You should see that the configuration has been reloaded.
<pre>
# kubectl logs -c prometheus-server-configmap-reload prometheus-server-75c9d576c9-gjlcr -n mynamespace
</pre>
<pre>
# kubectl logs -c prometheus-server prometheus-server-75c9d576c9-gjlcr -n mynamespace
...
</pre>
Next, open the Prometheus console and navigate to the Targets page: http://mon.192.168.42.185.nip.io/targets
[[File: ClipCapIt-190722-233253.PNG]]<br>
If there were more routers in the cluster, they would all appear here as separate endpoints.
<br>
<br>
<br>
==Metric types==
http://people.redhat.com/jrivera/openshift-docs_preview/openshift-origin/glusterfs-review/architecture/networking/haproxy-router.html <br>
At first glance, there are two meaningful metrics provided by HAproxy. These are the following:
<br>
=== haproxy_server_http_responses_total ===
<br>
<br>
Let's generate a 200 response using the test application. The counter of the 2xx responses should grow by one: http://test-app-service-mynamespace.192.168.42.185.nip.io/test/slowresponse/1/200
<pre>
haproxy_server_http_responses_total{code="2xx",job="openshift-router",namespace="mynamespace",pod="test-app",route="test-app-service",service="test-app-service"} 1
</pre>
<br>
Let's generate a 500 response using the test application again. This time, the counter of the 5xx responses should grow by one: http://test-app-service-mynamespace.192.168.42.185.nip.io/test/slowresponse/1/500
<pre>
haproxy_server_http_responses_total{code="5xx",job="openshift-router",namespace="mynamespace",pod="test-app",route="test-app-service",service="test-app-service"} 1
</pre>
=== haproxy_server_response_errors_total ===
Counter type
<pre>
haproxy_server_response_errors_total{instance="192.168.122.223:1936",job="openshift-router",namespace="mynamespace",pod="test-app-57574c8466-pvcsg",route="test-app-service",server="172.17.0.17:8080",service="test-app-service"}
</pre>
<br>
=Collecting metrics from the access logs=
==Overview==
The task is to process the access log of HAproxy with a log parser and generate Prometheus metrics that are made available to Prometheus through an HTTP endpoint. We will use the grok-exporter tool, which can do both. It can read logs from a file or from stdin and generate metrics based on them. The grok-exporter will receive the logs from HAproxy via an rsyslog server packaged into the same container. Rsyslog will put the logs into a file from which grok-exporter can read them, and grok-exporter converts them into Prometheus metrics.
Necessary steps:
* We have to create a docker image from grok-exporter that also contains rsyslog. (The container must be able to run the rsyslog server as root, which requires extra OpenShift configuration.)
* The grok-exporter image will run in OpenShift with the grok-exporter configuration stored in a ConfigMap and the rsyslog workspace on an OpenShift volume (writing the container file system at runtime is really inefficient).
* For the grok-exporter deployment, we have to create a ClusterIP-type service that can perform load-balancing between the grok-exporter pods.
* The routers (HAproxy) should be configured to write access logs in debug mode and send them to the remote rsyslog server listening on port 514 of the grok-exporter service.
* The rsyslog server running in the grok-exporter pod will both write the received HAproxy access logs into a file ('''/var/log/messages''', an emptyDir type volume) and send them to '''stdout''' for central log processing.
* Logs written to stdout will be picked up by the docker-log-driver and forwarded to the centralized log architecture (log retention).
* The grok-exporter program reads '''/var/log/messages''' and generates Prometheus metrics from the HAproxy access logs.
* The Prometheus scrape config has to be extended with a '''kubernetes_sd_configs''' section. Prometheus must collect the metrics directly from the grok-exporter pods, not through the Kubernetes service, to bypass load-balancing, since every pod needs to be queried.
<br>
<br>
==Introduction of grok-exporter==
Grok-exporter is a tool that can process logs based on regular expressions and produce one of the four basic types of Prometheus metrics:
* gauge
* counter
* histogram
* summary (quantile)
Detailed documentation at: <br>
https://github.com/fstab/grok_exporter/blob/master/CONFIG.md<br>
<br>
The grok-exporter can read from three types of input sources:
* '''file''': we will stick to this, it will process the log file written by rsyslog.
* '''webhook''': this solution could also be used if logstash was used as the rsyslog server. Logstash can send the logs to the grok-exporter webhook with the "http-output" logstash plugin.
* '''stdin''': with rsyslog, stdin could also be used. This requires the '''omprog''' module, which reads data from the rsyslog socket and passes it on through stdin to a program. The program will be restarted by omprog if it is no longer running: https://www.rsyslog.com/doc/v8-stable/configuration/modules/omprog.html
=== Alternative Solutions ===
'''Fluentd''' <br>
* fluent-plugin-rewrite-tag-filter
* fluent-plugin-prometheus
'''mtail''':<br>
The other alternative would be Google's '''mtail''' project, which is said to be more resource-efficient at processing logs than the grok engine.<br>
https://github.com/google/mtail
The grok-exporter configuration file consists of the following main sections:
* global: global settings, such as the config version.
* input: Tells where and how to retrieve the logs. Can be stdin, file or webhook. We will use the file input.
* grok: Location of the grok patterns. In the Docker image the pattern definitions are stored in the /grok/patterns folder by default.
* metrics: This is the most important part. Here you define the metrics and the associated regular expressions (in the form of grok patterns).
* server: Contains the port the http metrics server listens on.
<br>
====Metrics====
Metrics must be defined per metric type. The four basic Prometheus metric types are supported: '''Gauge, Counter, Histogram, Summary''' (quantile).<br>Each metric definition contains four parts:
* name: This will be the name of the metric
* help: This is the help text of the metric.
* match: Describes the structure of the log line in a grok regular expression style format. Here you can use pre-defined grok patterns:
** '''BASIC grok patterns''': https://github.com/logstash-plugins/logstash-patterns-core/blob/master/patterns/grok-patterns
** '''HAPROXY patterns''': https://github.com/logstash-plugins/logstash-patterns-core/blob/master/patterns/haproxy
* labels: Here we can add Prometheus labels to the generated metrics.
<br>
==== match definition ====
In the match section, we have to write a regular expression using grok building blocks. Grok assumes that the elements in the source log lines are separated by single spaces. Each building block has the format '''%{PATTERN_NAME}''', where PATTERN_NAME must be an existing, predefined grok pattern. The most common one is '''%{DATA}''', which refers to an arbitrary piece of data that contains no white-space. There are also several compound patterns that are built up from multiple basic grok patterns. We can assign the result groups to named variables that can later be used as the value of the Prometheus metric or as label values. The variable name must be placed inside the curly brackets of the pattern, separated by a colon from the pattern name, for example:
<pre>
%{DATA:this_is_the_name}
</pre>
The result of the field found by the regular expression will be assigned to the variable '''this_is_the_name''', which can be referenced when defining the value of the Prometheus metric or when producing the metric labels.
<br>
==== labels definition ====
In the labels section we can define labels for the generated Prometheus metric. The labels are given as a name:value list, where the value can be a string constant or a variable defined for a pattern in the match section. The variable must be referenced in go-template style, between double curly brackets, starting with a dot. For example, if we used the '''%{DATA:this_is_the_name}''' pattern in the match section, we can define the 'mylabel' Prometheus label with the value of the 'this_is_the_name' variable in the following way:<br>
<pre>
mylabel: '{{.this_is_the_name}}'
</pre>
The following log line is given:
<pre>
7/30/2016 2:37:03 PM adam 1.5
</pre>
And the following metric rule definition is given in the grok config:
<source lang="C++">
metrics:
user: '{{.user}}'
</source>
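The definition above is only shown partially. A minimal counter definition that would produce the output below could look like this; the metric name and the match pattern are assumptions based on the grok-exporter documentation example:
<source lang="C++">
metrics:
  - type: counter
    name: grok_example_lines_total
    help: Example counter metric with labels.
    match: '%{DATE} %{TIME} %{DATA} %{USER:user} %{NUMBER}'
    labels:
      user: '{{.user}}'
</source>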
<pre>
# HELP Example counter metric with labels.
...
</pre>
<br>
==== Determining the value of a metric ====
For a counter-type metric we don't need to determine the value of the metric, as it simply counts the matches of the regular expression. In contrast, for all other types we have to specify what should be considered the value. It has to be defined in the '''value''' section of the metric definition. Variables can be referenced the same way as we saw in the labels definition chapter, in go-template style. Here is an example. The following two log lines are given:
<pre>
7/30/2016 2:37:03 PM adam 1
7/30/2016 2:37:03 PM Adam 5
</pre>
And for these we define the following histogram, which consists of two buckets, bucket 1 and bucket 2:
<source lang="C++">
metrics:
  ...
</source>
<br>
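The definition above is truncated. A sketch of the full histogram definition, reconstructed from the surrounding text (the match pattern is an assumption), could be:
<source lang="C++">
metrics:
  - type: histogram
    name: grok_example_lines
    help: Example histogram metric with labels.
    match: '%{DATE} %{TIME} %{DATA} %{USER:user} %{NUMBER:val}'
    value: '{{.val}}'
    buckets: [1, 2]
    labels:
      user: '{{.user}}'
</source>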
==== Functions ====
* add
* subtract
* multiply
* divide
<source lang = "C ++">
value: "{{multiply .val 1000}}"
</source>
<pre>
# HELP Example counter metric with labels.
# TYPE grok_example_lines histogram
...
</pre>
Since the two values would change to 1000 and 5000 respectively, both would fall into the +Inf bucket.
<br>
<br>
== Creating the grok config file ==
We have to compile a grok pattern that matches the HAproxy access-log lines and can extract all the attributes required for creating the response-latency histogram. The required attributes are the following:
* total response time
* haproxy instance id
* openshift service namespace
<br>
Example haproxy access-log:
<pre>
Aug 6 20:53:30 192.168.122.223 haproxy[39]: 192.168.42.1:50708 [06/Aug/2019:20:53:30.267] public be_edge_http:mynamespace:test-app-service/pod:test-app-57574c8466-qbtg8:test-app-service:172.17.0.12:8080 1/0/0/321/321 200 135 - - --NI 2/2/0/1/0 0/0 "GET /test/slowresponse/1 HTTP/1.1"
</pre>
In the config.yml file, we will define a histogram that measures the full response time of the requests. This is a classic latency histogram, usually containing the following buckets (in seconds):
<pre>
[0.1, 0.2, 0.4, 1, 3, 8, 20, 60, 120]
</pre>
Response time histogram metrics are by convention called '''<name prefix>_http_request_duration_seconds'''.
'''config.yml'''
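The original config.yml is in the git repository linked above. The following is a minimal sketch reconstructed from the explanation below; the match expression is an assumption based on the example access-log line and may need adjusting to the actual log format:
<source lang="C++">
global:
  config_version: 2
input:
  type: file
  path: /var/log/messages
  readall: true        # only for testing, set it to false in production
grok:
  patterns_dir: ./patterns
metrics:
  - type: histogram
    name: haproxy_http_request_duration_seconds
    help: The request durations of the applications running in openshift that have route defined.
    match: '%{SYSLOGTIMESTAMP:date} %{IP:router_ip} %{DATA:haproxy}: %{IP:client_ip}:%{INT:client_port} \[%{DATA:accept_date}\] %{DATA:frontend} be_edge_http:%{DATA:namespace}:%{DATA:service}/pod:%{DATA:pod_name}:%{DATA:pod_service}:%{IP:pod_ip}:%{INT:pod_port} %{INT:Tq}/%{INT:Tw}/%{INT:Tc}/%{INT:Tr}/%{INT:Tt} %{INT:status_code} %{GREEDYDATA:rest}'
    value: "{{divide .Tt 1000}}"
    buckets: [0.1, 0.2, 0.4, 1, 3, 8, 20, 60, 120]
    labels:
      haproxy: '{{.haproxy}}'
      namespace: '{{.namespace}}'
      service: '{{.service}}'
      pod_name: '{{.pod_name}}'
server:
  port: 9144
</source>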
Explanation:
* '''type:file''' -> read logs from file
* '''path: /var/log/messages''' -> The rsyslog server writes logs to /var/log/messages by default
* '''readall: true''' -> always reads the entire log file. This should only be used for testing; in a live environment it always has to be set to false.
* '''patterns_dir: ./patterns''' -> base directory of the pattern definitions inside the docker image.
* '''value: "{{divide .Tt 1000}}"''' -> the serving time in the HAproxy log is in milliseconds, so we convert it to seconds.
* '''port: 9144''' -> the http port that will provide the /metrics endpoint.
<br>
{{warning|Do not forget to set the value of '''readall''' to '''false''' in a live environment, as readall=true can significantly degrade performance}}
<br>
<br>
:[[File:ClipCapIt-190808-170333.PNG]]
<br>
== Building the docker image ==
The grok-exporter docker image is available on the docker hub in several variants. The only problem with them is that they do not include the rsyslog server, which we need so that HAproxy can send its logs directly to the grok-exporter pod.<br>
docker-hub link: https://hub.docker.com/r/palobo/grok_exporter <br>
<br>
The second problem is that they are all based on an ubuntu base image, which makes it very difficult to get rsyslog to log to stdout (the ubuntu rsyslog package does not support logging to stdout out of the box), which is required by the Kubernetes centralized log collector. We are going to receive the HAproxy logs, so both monitoring and centralized logging can be served. Therefore the original grok Dockerfile will be ported to a '''centos 7''' base image and supplemented with the installation of rsyslog.
<br>
All necessary files are available on my git-hub: https://github.com/berkiadam/haproxy-metrics/tree/master/grok-exporter-centos <br>I also created an ubuntu based solution, which is an extension of the original docker-hub version and can also be found on git-hub in the '''grok-exporter-ubuntu''' folder. In the rest of this chapter, we are going to use the centOS version.
<br>
<br>
=== Dockerfile ===
We will start from the official '''palobo/grok_exporter''' Dockerfile, but we will extend it with the rsyslog installation and port it to centos: https://github.com/berkiadam/haproxy-metrics/tree/master/grok-exporter-centos
<br>
➲[[File:Grok-exporter-docker-build.zip|Download all files required for the Docker image build]]
<br>
<source lang="C++">
...
CMD sh -c "nohup /usr/sbin/rsyslogd -i ${PID_DIR}/pid -n &" && ./grok_exporter -config /grok/config.yml
</source>
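The full Dockerfile is in the zip and in the git repository. As an illustration only, a minimal sketch of a centos 7 based image could look like this; the package names, the copied files and the PID_DIR value are assumptions:
<source lang="C++">
FROM centos:7

# Assumption: rsyslog is installed from the CentOS repositories
# (the omstdout output module may need an extra package; see the git repository for the exact steps)
RUN yum install -y rsyslog && yum clean all

# Assumption: the grok_exporter binary, the patterns and the configs are copied from the build context
COPY grok_exporter /grok/grok_exporter
COPY patterns /grok/patterns
COPY config.yml /grok/config.yml
COPY rsyslog.conf /etc/rsyslog.conf

# Assumed location for the rsyslog pid file
ENV PID_DIR /tmp
WORKDIR /grok
EXPOSE 9144 514

# Start rsyslog in the background, then grok_exporter in the foreground (same CMD as above)
CMD sh -c "nohup /usr/sbin/rsyslogd -i ${PID_DIR}/pid -n &" && ./grok_exporter -config /grok/config.yml
</source>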
{{note|It is important to use grok-exporter version 0.2.7 or higher, as functions were introduced in this version}}
<br>
<br>
The '''rsyslog.conf''' file must include at least the following, which enables receiving logs on port 514 over both UDP and TCP (see the zip above for details). The logs are written to stdout and to /var/log/messages.
<pre>
$ModLoad omstdout.so
...
</pre>
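The complete file is in the zip above. A minimal sketch of such an rsyslog.conf, assuming the legacy directive syntax, could be:
<pre>
# Load the UDP and TCP input modules and listen on port 514
$ModLoad imudp
$UDPServerRun 514
$ModLoad imtcp
$InputTCPServerRun 514

# Send every log line to stdout and also write it to /var/log/messages
$ModLoad omstdout.so
*.* :omstdout:
*.* /var/log/messages
</pre>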
=== Local build and local test ===
First, we will build the docker image with the local docker daemon so that we can run it locally for testing. Later we will build it directly on the minishift VM, since we can only upload it to the minishift docker registry from there. Since in the end we will upload the image to a remote (not local) docker repository, it is important to follow the naming convention:
<pre>
<repo URL>:<repo port>/<namespace>/<image-name>:<tag>
</pre>
We will upload the image to the docker registry running on minishift, so it is important to specify the address and port of the minishift docker registry and the OpenShift namespace where the image will be deployed.
<pre>
# docker build -t 172.30.1.1:5000/default/grok_exporter:1.1.0 .
</pre>
The resulting image can easily be tested by running it locally with native docker. Create a haproxy test log file ('''haproxy.log''') with the following content. This will be processed by the grok-exporter during the test, as if it had been provided by haproxy.
<pre>
Aug 6 20:53:30 192.168.122.223 haproxy[39]: 192.168.42.1:50708 [06/Aug/2019:20:53:30.267] public be_edge_http:mynamespace:test-app-service/pod:test-app-57574c8466-qbtg8:test-app-service:172.17.0.12:8080 1/0/0/321/321 200 135 - - --NI 2/2/0/1/0 0/0 "GET /test/slowresponse/1 HTTP/1.1"
</pre>
<br>
Put the grok config file '''config.yml''' created above into the same folder. In the config.yml file, change the input.path to '''/grok/haproxy.log''', which is where the grok-exporter will find our test log content. Then start the container with the following '''docker run''' command:
<pre>
# docker run -d -p 9144:9144 -p 514:514 -v $(pwd)/config.yml:/etc/grok_exporter/config.yml -v $(pwd)/haproxy.log:/grok/haproxy.log --name grok 172.30.1.1:5000/default/grok_exporter:1.1.0
</pre>
<br>
Check the container logs:
<pre>
# docker logs grok
 * Starting enhanced syslogd rsyslogd
...
</pre>
<br>
Metrics are then available in the browser at http://localhost:9144/metrics:
<pre>
...
</pre>
<br>
<br>
As a second step, verify that the '''rsyslog''' server running in the docker container can receive remote log messages. To do this, first enter the container with the exec command and check the content of the /var/log/messages file in -f (follow) mode.
<pre>
# docker exec -it grok /bin/bash
# tail -f /var/log/messages
</pre>
<br>
Then, from the host machine, send a test log message to port 514 with the '''logger''' command:
<pre>
# logger -n localhost -P 514 -T "this is the message"
</pre>
(the -T flag means TCP)
The message should show up in the container's /var/log/messages file:
<pre>
Aug 8 16:54:25 dell adam this is the message
</pre>
<br>
<br>
=== Remote build ===
We have to upload our custom grok Docker image to minishift's own registry. To do so, the image has to be built with the minishift VM's local docker daemon, since the minishift registry can only be accessed from the VM, so images can only be pushed from the VM's local docker daemon.<br>Details can be found here: [[Openshift_basics#Minishfit_docker_registry|➲Pushing images to the minishift docker registry]]
<pre>
# oc login -u system:admin
# oc adm policy add-cluster-role-to-user cluster-admin admin --as=system:admin
cluster role "cluster-admin" added: "admin"
</pre>
{{note|If we get the error '''Error from server (NotFound): the server could not find the requested resource''', it probably means that our '''oc''' client is older than the OpenShift version}}
<pre>
# minishift docker-env
# eval $(minishift docker-env)
# oc login
Username: admin
Password: <admin>
# docker login -u admin -p $(oc whoami -t) $(minishift openshift registry)
Login Succeeded
</pre>
Let's build the image on the minishift VM as well:
<pre>
# docker build -t 172.30.1.1:5000/default/grok_exporter:1.1.0 .
</pre>
Then push it to the minishift registry:
<pre>
# docker push 172.30.1.1:5000/default/grok_exporter:1.1.0
</pre>
<br>
==Required Kubernetes objects==
* haproxy-exporter service account
* scc-anyuid.yaml
<br>
<br>
===Create the ServiceAccount===
The haproxy-exporter needs its own serviceAccount, for which we will allow running privileged (root) containers. This is needed by the rsyslog server.
<pre>
# kubectl create serviceaccount haproxy-exporter -n default
serviceaccount/haproxy-exporter created
</pre>
<source lang="C++">
apiVersion: v1
kind: ServiceAccount
metadata:
  name: haproxy-exporter
  namespace: default
secrets:
- name: haproxy-exporter-token-8svkx
</source>
===Additional Kubernetes objects===
<br>
<br>
<br><br>
<pre>
# kubectl get SecurityContextConstraints
...
</pre>
<br>
<br>
The haproxy-exporter serviceAccount has to be added to the '''anyuid''' SecurityContextConstraint so that the container can run the rsyslog server as root.
<br>
'''scc-anyuid.yaml'''
<source lang="C++">
kind: SecurityContextConstraints
metadata:
...
users:
- system:serviceaccount:default:haproxy-exporter
...
</source>
<pre>
# oc edit scc anyuid
</pre>
<br>
===Create the objects===
<pre>
# kubectl apply -f cm-haproxy-exporter.yaml
configmap/haproxy-exporter created
</pre>
<pre>
# kubectl apply -f deployment-haproxy-exporter.yaml
deployment.apps/haproxy-exporter created
# kubectl rollout status deployment haproxy-exporter -n default
deployment "haproxy-exporter" successfully rolled out
</pre>
<br>
===Testing===
<pre>
# kubectl logs haproxy-exporter-744d84f5df-9fj9m -n default
</pre>
Enter the haproxy-exporter pod:
<pre>
# kubectl exec -it haproxy-exporter-647d7dfcdf-gbgrg /bin/bash -n default
</pre>
Send a test message to the local rsyslog server:
<pre>
logger -n localhost -P 514 -T "this is the message"
</pre>
The message shows up in the messages file:
<pre>
# cat messages
Aug 28 19:16:09 localhost root: this is the message
</pre>
And it also appears on the container's stdout:
<pre>
# kubectl logs haproxy-exporter-647d7dfcdf-gbgrg -n default
Starting server on http://haproxy-exporter-647d7dfcdf-gbgrg:9144/metrics
2019-08-28T19:16:09+00:00 localhost root: this is the message
</pre>
<br>
==HAproxy configuration==
===Setting the environment variables===
In the HAproxy routers, we will set the address of the rsyslog server running in the haproxy-exporter pod via environment variables. Let's check the haproxy-exporter service first.
<pre>
# kubectl get svc -n default
NAME                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                    AGE
haproxy-exporter-service    ClusterIP   172.30.213.183   <none>        9144/TCP,514/TCP,514/UDP   15s
..
</pre>
Set the rsyslog server address on the router's deploymentConfig via the ROUTER_SYSLOG_ADDRESS environment variable:
<pre>
# oc set env dc/myrouter ROUTER_SYSLOG_ADDRESS=172.30.213.183 -n default
deploymentconfig.apps.openshift.io/myrouter updated
</pre>
{{note|In minishift, name resolution for Kubernetes service names does not work in the router container, because it is not the Kubernetes cluster DNS server that is configured there but that of the minishift VM. Therefore we have no other choice than to enter the IP address of the service instead of its name. In an OpenShift environment, we would use the name of the service}}
Then switch HAproxy logging to debug level so that access logs are generated:
<pre>
# oc set env dc/myrouter ROUTER_LOG_LEVEL=debug -n default
deploymentconfig.apps.openshift.io/myrouter updated
</pre>
{{warning|A performance test must be carried out to see how much extra load it puts on haproxy when it runs in debug mode}}
<br>
Check in the router container that the syslog settings have been applied to the HAproxy configuration:
<pre>
# kubectl exec -it myrouter-5-hf5cs /bin/bash -n default
$ cat /var/lib/haproxy/conf/haproxy.config
global
..
</pre>
<br>
<br>
<br>
===Testing the rsyslog server===
Generate some traffic through haproxy, then go back to the haproxy-exporter container and list the content of the messages file.
<pre>
# kubectl exec -it haproxy-exporter-744d84f5df-9fj9m /bin/bash -n default
#
# tail -f /var/log/messages
Aug 9 12:52:17 192.168.122.223 haproxy[24]: Proxy fe_sni stopped (FE: 0 conns, BE: 0 conns).
Aug 9 12:52:17 192.168.122.223 haproxy[24]: Proxy be_no_sni stopped (FE: 0 conns, BE: 0 conns).
Aug 9 12:52:17 192.168.122.223 haproxy[24]: Proxy fe_no_sni stopped (FE: 0 conns, BE: 0 conns).
Aug 9 12:52:17 192.168.122.223 haproxy[24]: Proxy openshift_default stopped (FE: 0 conns, BE: 1 conns).
Aug 9 12:52:17 192.168.122.223 haproxy[24]: Proxy be_edge_http:dsp:nginx-route stopped (FE: 0 conns, BE: 0 conns).
Aug 9 12:52:17 192.168.122.223 haproxy[24]: Proxy be_http:mynamespace:prometheus-alertmanager-jv69s stopped (FE: 0 conns, BE: 0 conns).
Aug 9 12:52:17 192.168.122.223 haproxy[24]: Proxy be_http:mynamespace:prometheus-server-2z6zc stopped (FE: 0 conns, BE: 0 conns).
Aug 9 12:52:17 192.168.122.223 haproxy[24]: Proxy be_edge_http:mynamespace:test-app-service stopped (FE: 0 conns, BE: 0 conns).
Aug 9 12:52:17 192.168.122.223 haproxy[24]: Proxy be_edge_http:myproject:nginx-route stopped (FE: 0 conns, BE: 0 conns).
Aug 9 12:52:17 192.168.122.223 haproxy[32]: 127.0.0.1:43720 [09/Aug/2019:12:52:17.361] public openshift_default/<NOSRV> 1/-1/-1/-1/0 503 3278 - - SC-- 1/1/0/0/0 0/0 "HEAD / HTTP/1.1"
</pre>
After generating some traffic, the access-log lines also appear:
<pre>
...
Aug 9 12:57:21 192.168.122.223 haproxy[32]: 192.168.42.1:48266 [09/Aug/2019:12:57:20.636] public be_edge_http:mynamespace:test-app-service/pod:test-app-57574c8466-qbtg8:test-app-service:172.17.0.17:8080 1/0/12/428/440 200 135 - - --II 2/2/0/1/0 0/0 "GET /test/slowresponse/1 HTTP/1.1"
Aug 9 12:57:28 192.168.122.223 haproxy[32]: 192.168.42.1:48266 [09/Aug/2019:12:57:21.075] public be_edge_http:mynamespace:test-app-service/pod:test-app-57574c8466-qbtg8:test-app-service:172.17.0.17:8080 4334/0/0/3021/7354 200 135 - - --VN 2/2/0/1/0 0/0 "GET /test/slowresponse/3000 HTTP/1.1"
Aug 9 12:57:28 192.168.122.223 haproxy[32]: 192.168.42.1:48266 [09/Aug/2019:12:57:28.430] public be_edge_http:mynamespace:test-app-service/pod:test-app-57574c8466-qbtg8:test-app-service:172.17.0.17:8080 90/0/0/100/189 404 539 - - --VN 2/2/0/1/0 0/0 "GET /favicon.ico HTTP/1.1"
Aug 9 12:57:35 192.168.122.223 haproxy[32]: 192.168.42.1:48268 [09/Aug/2019:12:57:20.648] public public/<NOSRV> -1/-1/-1/-1/15002 408 212 - - cR-- 2/2/0/0/0 0/0 "<BADREQ>"
</pre>
===Testing the grok-exporter component===
Query the grok-exporter metrics at http://<pod IP>:9144/metrics. You can open this URL either in the haproxy-exporter pod itself using localhost, or in any other pod using the haproxy-exporter pod's IP address. In the example below I enter the test-app pod. We have to see the '''haproxy_http_request_duration_seconds_bucket''' histogram among the metrics.
<pre>
# kubectl exec -it test-app-57574c8466-qbtg8 /bin/bash -n mynamespace
$ curl http://172.30.213.183:9144/metrics
...
# HELP haproxy_http_request_duration_seconds The request durations of the applications running in openshift that have route defined.
# TYPE haproxy_http_request_duration_seconds histogram
haproxy_http_request_duration_seconds_bucket{haproxy="haproxy[32]",namespace="mynamespace",pod_name="test-app-57574c8466-qbtg8",service="test-app-service",le="0.1"} 0
haproxy_http_request_duration_seconds_bucket{haproxy="haproxy[32]",namespace="mynamespace",pod_name="test-app-57574c8466-qbtg8",service="test-app-service",le="0.2"} 1
haproxy_http_request_duration_seconds_bucket{haproxy="haproxy[32]",namespace="mynamespace",pod_name="test-app-57574c8466-qbtg8",service="test-app-service",le="0.4"} 1
haproxy_http_request_duration_seconds_bucket{haproxy="haproxy[32]",namespace="mynamespace",pod_name="test-app-57574c8466-qbtg8",service="test-app-service",le="1"} 2
haproxy_http_request_duration_seconds_bucket{haproxy="haproxy[32]",namespace="mynamespace",pod_name="test-app-57574c8466-qbtg8",service="test-app-service",le="3"} 2
haproxy_http_request_duration_seconds_bucket{haproxy="haproxy[32]",namespace="mynamespace",pod_name="test-app-57574c8466-qbtg8",service="test-app-service",le="8"} 3
haproxy_http_request_duration_seconds_bucket{haproxy="haproxy[32]",namespace="mynamespace",pod_name="test-app-57574c8466-qbtg8",service="test-app-service",le="20"} 3
haproxy_http_request_duration_seconds_bucket{haproxy="haproxy[32]",namespace="mynamespace",pod_name="test-app-57574c8466-qbtg8",service="test-app-service",le="60"} 3
haproxy_http_request_duration_seconds_bucket{haproxy="haproxy[32]",namespace="mynamespace",pod_name="test-app-57574c8466-qbtg8",service="test-app-service",le="120"} 3
haproxy_http_request_duration_seconds_bucket{haproxy="haproxy[32]",namespace="mynamespace",pod_name="test-app-57574c8466-qbtg8",service="test-app-service",le="+Inf"} 3
haproxy_http_request_duration_seconds_sum{haproxy="haproxy[32]",namespace="mynamespace",pod_name="test-app-57574c8466-qbtg8",service="test-app-service"} 7.9830000000000005
haproxy_http_request_duration_seconds_count{haproxy="haproxy[32]",namespace="mynamespace",pod_name="test-app-57574c8466-qbtg8",service="test-app-service"} 3
</pre>
<br>
==Prometheus settings==
===Static configuration===
<source lang="C++">
    - job_name: grok-exporter
      scrape_interval: 5s
      metrics_path: /metrics
      static_configs:
        - targets: ['grok-exporter-service.default:9144']
</source>
=== Pod Level Data Collection ===
<pre>
# kubectl get Endpoints haproxy-exporter-service -n default -o yaml
kind: Endpoints
metadata:
...
</pre>
Based on the Endpoint above, the following service-discovery meta labels can be used to select the grok-exporter pods (see the scrape config sketch after the list):
* __meta_kubernetes_endpoint_port_name: metrics -> 9144
* __meta_kubernetes_service_name: haproxy-exporter-service
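A sketch of such a pod-level scrape job is shown below; it keeps every endpoint of the '''haproxy-exporter-service''' with the port named '''metrics''', so every grok-exporter pod is scraped individually. The namespace filter is an assumption:
<source lang="C++">
    - job_name: 'grok-exporter'
      scrape_interval: 5s
      metrics_path: /metrics
      kubernetes_sd_configs:
        - role: endpoints
          namespaces:
            names:
              - default
      relabel_configs:
        - source_labels: [__meta_kubernetes_service_name]
          action: keep
          regex: haproxy-exporter-service
        - source_labels: [__meta_kubernetes_endpoint_port_name]
          action: keep
          regex: metrics
</source>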
<br>
<pre>
# kubectl apply -f cm-prometheus-server-haproxy-full.yaml
</pre>
<pre>
# kubectl logs -f -c prometheus-server prometheus-server-75c9d576c9-gjlcr -n mynamespace
...
level=info ts=2019-07-22T20:25:36.016Z caller=main.go:730 msg="Loading configuration file" filename=/etc/config/prometheus.yml
</pre>
<br>
<br>
===Scaling the haproxy-exporter===
<pre>
# kubectl scale deployment haproxy-exporter --replicas=2 -n default
deployment.extensions/haproxy-exporter scaled
</pre>
<pre>
# kubectl get deployment haproxy-exporter -n default
NAME               DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
haproxy-exporter   2         2         2            2           3h
</pre>
<br>
<br>
==Metric types==
===haproxy_http_request_duration_seconds_bucket===
type: histogram<br>
The number of requests whose response time falls below the given bucket boundary (the ''le'' label).
<br>
===haproxy_http_request_duration_seconds_count===
type: counter<br>
The total number of requests falling into the given histogram.
<pre>
haproxy_http_request_duration_seconds_count{haproxy="haproxy[39]",job="haproxy-exporter",namespace="mynamespace",pod_name="test-app",service="test-app-service"} 5
</pre>
<br>
<br>
===haproxy_http_request_duration_seconds_sum===
type: counter<br>
The sum of the response times in the given histogram. Based on the previous example, there were a total of 5 requests and the summed serving time was 13.663 seconds.
<pre>
haproxy_http_request_duration_seconds_sum{haproxy="haproxy[39]",job="haproxy-exporter",namespace="mynamespace",pod_name="test-app",service="test-app-service"} 13.663
</pre>
<br>
=OpenShift router + rsyslog=
Starting with OpenShift 3.11, it is possible to define a router so that OpenShift automatically starts a sidecar rsyslog container in the router pod and configures HAproxy to send the logs to the rsyslog server through a socket (an emptyDir volume), which writes them to stdout by default. The configuration of rsyslog is stored in a configMap.
<br>
<pre>
# oc adm router myrouter --extended-logging -n default
info: password for stats user admin has been set to O6S6Ao3wTX
</pre>
<br>
Switch the new router to debug log level as well:
<pre>
# oc set env dc/myrouter ROUTER_LOG_LEVEL=debug -n default
deploymentconfig.apps.openshift.io/myrouter updated
</pre>
<br>
The router pod now contains an additional '''syslog''' sidecar container:
<pre>
# kubectl describe pod/myrouter-2-bps5v -n default
..
Containers:
...
...
</pre>
<br>
<br>
===router container===
When we enter the router container, we can see that the configuration has already been updated:
<pre>
# kubectl exec -it myrouter-2-bps5v /bin/bash -n default -c router
bash-4.2$ cat /var/lib/haproxy/conf/haproxy.config
global
...
...
defaults
...
...
backend be_edge_http:mynamespace:test-app-service
</pre>
<br>
<br>
===rsyslog container===
<pre>
# kubectl exec -it myrouter-2-bps5v /bin/bash -n default -c syslog
$ cat /etc/rsyslog/rsyslog.conf
$ModLoad imuxsock
$SystemLogSocketName /var/lib/rsyslog/rsyslog.sock
$ModLoad omstdout.so
*.* :omstdout:
</pre>
<br>
<pre>
# kubectl get cm rsyslog-config -n default -o yaml
apiVersion: v1
data:
kind: ConfigMap
metadata:
</pre>
<br>
<br>
===Viewing the HAproxy logs===
<pre>
# kubectl logs -f myrouter-2-bps5v -c syslog
</pre>