
Openshift - HAproxy metrics EN

[[Openshift - HAproxy metrics|Openshift - HAproxy metrics HU]]
:[[File:ClipCapIt-190807-102633.PNG]]
<br>
=== Http test application ===
For generating HTTP traffic, I made a test application that can produce responses with arbitrary response times and HTTP response codes. The source is available here: https://github.com/berkiadam/haproxy-metrics/tree/master/test-app
The Kubernetes install files can be found at the root of the git repository.
After installation, the application can be used with the following URLs:
* http://test-app-service-mynamespace.192.168.42.185.nip.io/test/slowresponse/<delay in milliseconds>
* http://test-app-service-mynamespace.192.168.42.185.nip.io/test/slowresponse/<delay in milliseconds>/<http response code>
=Using HAproxy Metric Endpoint=
HAproxy has a built-in metric endpoint, which by default provides Prometheus-standard metrics (CSV output is also still available), but most of its metrics are not really usable. There are two metric types that are worth mentioning: one of them counts the responses with 2xx HTTP codes per backend, the other counts the responses with 5xx codes.
The metric query endpoint (/metrics) is turned on by default. It can be turned off, but HAproxy will still collect the metrics in the background. The HAproxy pod is made up of two components: HAproxy itself and the router controller that manages the HAproxy configuration. The metric manager collects metrics from both components every 5 seconds. Frontend and backend metrics are both collected, grouped by service.
:[[File:ClipCapIt-190808-094455.PNG|600px]]
== Query Metrics ==
There are two ways to query metrics.
# Basic authentication with username + password: the /metrics HTTP endpoint is called with basic authentication to query the metrics.
# Authentication with Kubernetes RBAC rules for the appropriate serviceAccount: for machine processing (e.g. Prometheus), RBAC rule based authentication can be enabled so that a given serviceAccount may query the metrics.
<br>
=== User + password based query authentication ===
For a username/password based query, the default metrics URL is:<pre>http://<user>:<password>@<router_IP>:<STATS_PORT>/metrics</pre>
The user, the password and the port can be found in the service definition of the HAproxy router. To do this, first find the router service:
<pre>
# kubectl get svc -n default
router ClusterIP 172.30.130.191 <none> 80/TCP,443/TCP,1936/TCP 4d
</pre>
You can see that, beyond the default 80 and 443, it is listening on an extra port, '''1936''', which is the port of the metrics endpoint.
Now, let's examine the service definition to extract the username and password:
<source lang="C++">
# kubectl get svc router -n default -o yaml
</source>
According to this, the URL of the metrics endpoint, using the node's (minishift) IP address, is the following: http://admin:4v9a7ucfMi@192.168.42.64:1936/metrics (You can't open this URL in a web browser, as browsers aren't familiar with this format; use curl to test it from the command line.)
<pre>
# curl admin:4v9a7ucfMi@192.168.42.64:1936/metrics
 
# HELP apiserver_audit_event_total Counter of audit events generated and sent to the audit backend.
</pre>
=== ServiceAccount based query authentication ===
It is possible to query the HAproxy metrics not only with basic authentication, but also with RBAC rules.
We need to create a '''ClusterRole''' that allows the Prometheus serviceAccount to query the '''routers/metrics''' endpoint. This will later be bound to the serviceAccount running Prometheus.
<br>
'''cr-prometheus-server-route.yaml'''
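A minimal sketch of such a ClusterRole; the object name and the apiGroup are assumptions, the authoritative file is in the git repository linked above:
<source lang="C++">
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: prometheus-server-route    # assumed name
rules:
- apiGroups:
  - route.openshift.io             # assumed apiGroup for the router metrics resource
  resources:
  - routers/metrics
  verbs:
  - get
</source>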
The second step is to create a '''ClusterRoleBinding''' that binds the Prometheus serviceAccount to the new role.
<br>
'''crb-prometheus-server-route.yaml'''
<source lang="C++">
...
    namespace: mynamespace
</source>
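A minimal sketch of the complete binding; the ClusterRole name and the Prometheus serviceAccount name are assumptions, the authoritative file is in the git repository:
<source lang="C++">
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: prometheus-server-route    # assumed name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: prometheus-server-route    # the ClusterRole created above
subjects:
- kind: ServiceAccount
  name: prometheus-server          # assumed serviceAccount used by Prometheus
  namespace: mynamespace
</source>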
Let's create the two new objects:
<pre>
# kubectl apply -f cr-prometheus-server-route.yaml
# kubectl apply -f crb-prometheus-server-route.yaml
</pre>
==Prometheus integration==
Let's examine the '''Endpoints''' definition of the HAproxy router. Based on that we can create the Prometheus configuration that is responsible for finding all the running HAproxy instances at runtime. We have to find the OpenShift Endpoints object named '''router''' that has a port definition called '''1936-tcp'''; Prometheus will take the port number for the HAproxy metrics from this port definition and query the default metric endpoint (/metrics).
<pre>
# kubectl get Endpoints router -n default -o yaml
</pre>
<br>
<br>
In the Prometheus configuration we need to add a new '''target''' that uses '''kubernetes_sd_configs''' to look for Endpoints objects with the name '''router''' and the port '''1936-tcp'''.
<source lang="c++">
- job_name: 'openshift-router'
</source>
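A fuller version of this scrape job could look like the sketch below. It mirrors the haproxy-exporter job shown later in this document; the TLS and bearer-token settings and the exact relabel regex are assumptions that should be checked against your cluster.
<source lang="C++">
    - job_name: 'openshift-router'
      scheme: https                    # assumption: the router stats endpoint is scraped over https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        server_name: router.default.svc
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      kubernetes_sd_configs:
      - role: endpoints
        namespaces:
          names:
          - default
      relabel_configs:
      - source_labels: [__meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
        action: keep
        regex: router;1936-tcp
</source>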
Update the '''ConfigMap''' of the Prometheus configuration:
<pre>
# kubectl apply -f cm-prometheus-server-haproxy.yaml
</pre>
 
Let's check in the Prometheus pod that the configuration has been reloaded:
<pre>
# kubectl describe pod prometheus-server-75c9d576c9-gjlcr -n mynamespace
Containers:
prometheus-server-configmap-reload:
...
prometheus-server:
</pre>
Let's look into the logs of the sidecar container running in the Prometheus pod (it is responsible for reloading the configuration). We should see that the configuration has been reloaded.
<pre>
# kubectl logs -c prometheus-server-configmap-reload prometheus-server-75c9d576c9-gjlcr -n mynamespace
</pre>
Let's check the Prometheus server container logs as well:
<pre>
# kubectl logs -c prometheus-server prometheus-server-75c9d576c9-gjlcr -n mynamespace
</pre>
Next, open the Prometheus console and navigate to the 'targets' page: http://mon.192.168.42.185.nip.io/targets
[[File: ClipCapIt-190722-233253.PNG]]<br>
If there were more routers in the cluster, they would all appear here as separate endpoints.
<br>
 
<br>
 
<br>
==Metric types==
http://people.redhat.com/jrivera/openshift-docs_preview/openshift-origin/glusterfs-review/architecture/networking/haproxy-router.html <br>
At first glance, there are two meaningful metrics provided by HAproxy. These are the following:
<br>
=== haproxy_server_http_responses_total ===
This is a Prometheus counter that shows, per backend, how many responses were served with 2xx and 5xx HTTP status codes for a given service. There is no pod-based breakdown, it is aggregated at the service level only. Unfortunately, we do not receive information about HTTP 3xx and 4xx responses; we will get those from the access log.
<br>
<br>
Let's generate a 200 response using the test application. We should see the counter of the 2xx responses grow by one: http://test-app-service-mynamespace.192.168.42.185.nip.io/test/slowresponse/1/200
<pre>
haproxy_server_http_responses_total {code = "2xx", Job = "openshift router" namespace = "mynamespace" pod = "body-app", route = "body-app-service" service = "body-app-service"} 1
<br>
Let's generate a 500 response using the test application again. This time the counter of the 5xx responses grows by one: http://test-app-service-mynamespace.192.168.42.185.nip.io/test/slowresponse/1/500
<pre>
haproxy_server_http_responses_total {code = "5xx" job = "openshift router" namespace = "mynamespace" pod = "body-app", route = "body-app-service" service = "body-app-service"} 1
=== haproxy_server_response_errors_total ===
type: counter
<pre>
haproxy_server_response_errors_total{instance="192.168.122.223:1936",job="openshift-router",namespace="mynamespace",pod="test-app-57574c8466-pvcsg",route="test-app-service",server="172.17.0.17:8080",service="test-app-service"}
</pre>
<br>
=Collecting metrics from the access logs=
==Overview==
The task is to process the HAproxy access log with a log parser and generate Prometheus metrics from it that are made available to Prometheus through an HTTP endpoint. We will use the grok-exporter tool, which can do both: it can read logs from a file or from stdin and generate metrics based on them. Grok-exporter receives the logs from HAproxy through an rsyslog server packaged into the same image. Rsyslog puts the logs into a file, from which grok-exporter can read them and convert them into Prometheus metrics.
The necessary steps:
* We have to create a docker image from grok-exporter that also contains rsyslog. (The container must be able to run the rsyslog server as root, which requires extra OpenShift configuration.)
* The grok-exporter image runs in OpenShift with the grok-exporter configuration coming from a ConfigMap and the rsyslog working directory placed on an OpenShift volume (writing a container's file system at runtime is really inefficient).
* For the grok-exporter deployment we have to create a ClusterIP-type service that performs load balancing between the grok-exporter pods.
* The HAproxy routers have to be configured to produce access logs in debug mode and to send them to the remote rsyslog server listening on port 514 of the grok-exporter service.
* The rsyslog server running in the grok-exporter pod writes the received HAproxy access logs into the file '''/var/log/messages''' (an emptyDir type volume) and also sends them to '''stdout''' for central log processing.
* Logs written to stdout are picked up by the docker log driver and forwarded to the centralized log architecture (log retention).
* The grok-exporter program reads '''/var/log/messages''' and generates Prometheus metrics from the HAproxy access logs.
* The Prometheus scrape config has to be extended with a '''kubernetes_sd_configs''' section. Prometheus must collect the metrics directly from the grok-exporter pods, bypassing the Kubernetes service and its load balancing, since every pod needs to be queried.
<br>
<br>
==Introduction of grok-exporter==
Grok-exporter is a tool that can process logs based on regular expressions and produce the four basic types of Prometheus metrics:
* gauge
* counter
* histogram
* summary (quantile)
You can set any number of labels on a metric using the parsed elements of the log line. Grok-exporter is based on the '''logstash-grok''' implementation, using the patterns and functions defined for logstash.
Detailed documentation at: <br>
https://github.com/fstab/grok_exporter/blob/master/CONFIG.md<br>
<br>
Grok-exporter can read from three types of input sources:
* '''file''': we will stick to this; it will process the log written by rsyslog.
* '''webhook''': this solution could be used if logstash were used as the rsyslog server. Logstash can send the logs to grok-exporter's webhook with the "http-output" logstash plugin.
* '''stdin''': with rsyslog, stdin could also be used. This requires the '''omprog''' module, which can read from the rsyslog socket and pass the data on to a program through its stdin. The program is restarted by omprog if it stops running: https://www.rsyslog.com/doc/v8-stable/configuration/modules/omprog.html
=== Alternative Solutions ===
'''Fluentd''' <br>
To achieve the same goal with fluentd, we would need three fluentd plugins (I haven't tried this):
* fluent-plugin-rewrite-tag-filter
* fluent-plugin-prometheus
'''mtail''':<br>
The other alternative would be Google's '''mtail''' project, which is said to be more resource-efficient at processing logs than the grok engine.<br>
https://github.com/google/mtail
* global:
* input: tells where and how to retrieve the logs. It can be stdin, file or webhook. We will use the file input.
* grok: the location of the grok patterns. In the Docker image the pattern definitions are stored in the /grok/patterns folder by default.
* metrics: this is the most important part. Here you define the metrics and the associated regular expressions (in the form of grok patterns).
* server: contains the port the HTTP metrics server listens on.
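For orientation, the top-level skeleton of a grok-exporter config file looks roughly like this (a sketch only; config_version 2 is the format used by grok-exporter 0.2.x):
<source lang="C++">
global:
    config_version: 2        # format version of the config file
input:
    type: file               # stdin, file or webhook
    path: /var/log/messages
grok:
    patterns_dir: ./patterns # pattern definitions shipped in the image
metrics: []                  # the metric definitions, discussed in the next section
server:
    port: 9144               # port of the /metrics HTTP endpoint
</source>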
<br>
====Metrics====
Metrics must be defined per metric type. The four basic Prometheus metric types are supported: '''Gauge, Counter, Histogram, Summary''' (quantile).<br>Each metric definition contains four parts:
* name: This will be the name of the metric
* help: the help text for the metric.
* match: describes the structure of the log line in a regular-expression-like format. Here you can use predefined grok patterns:
** '''BASIC grok patterns''': https://github.com/logstash-plugins/logstash-patterns-core/blob/master/patterns/grok-patterns
** '''HAPROXY patterns''': https://github.com/logstash-plugins/logstash-patterns-core/blob/master/patterns/haproxy
* labels: here we can add Prometheus labels to the generated metrics. The named result groups from the match section can be referenced here; this creates a label whose value is the parsed data.
<br>
==== match definition ====
Grok assumes that the elements in the source log lines are separated by single spaces. In the match section you have to write a regular expression using grok building blocks. Each building block has the format '''%{PATTERN_NAME}''', where PATTERN_NAME must be an existing, predefined grok pattern. The most common type is '''%{DATA}''', which refers to an arbitrary data structure that contains no white space. There are several compound patterns that are built up from multiple basic grok patterns. We can assign the result groups of the regular expression to named variables that can be used as the value of the Prometheus metric or as label values. The variable name must be placed inside the curly brackets of the pattern, separated by a colon from the pattern name, for example:<pre>%{DATA:this_is_the_name}</pre>
The result of the field matched by the pattern is assigned to the variable '''this_is_the_name''', which can be referenced when defining the value of the Prometheus metric or when producing the metric's labels.
<br>
==== labels definition ====
In the labels section we can define labels for the generated Prometheus metric. The labels are given as a name: value list, where the value can be a string constant or a variable defined in the match section. The variable must be referenced in go-template style, between double curly brackets and starting with a dot. For example, if we used the '''%{DATA:this_is_the_name}''' pattern in the match section, we can define the 'mylabel' Prometheus label with the value of the 'this_is_the_name' variable in the following way: <br><pre>
mylabel: '{{.this_is_the_name}}'
</pre>
Let's assume that the value of the 'this_is_the_name' variable is 'myvalue'. Then the metric receives the label '''{mylabel="myvalue"}'''. <br>We are going to demonstrate a full metric definition in the following example: <br>
The following log line is given:
<pre>
7/30/2016 2:37:03 PM adam 1.5
</pre>
And the following metric rule is defined in the grok config:
<source lang="C++">
metrics:
    - type: counter
      name: grok_example_lines_total
      help: Example counter metric with labels.
      match: '%{DATE} %{TIME} %{WORD} %{USER:user} %{NUMBER}'
      labels:
          user: '{{.user}}'
</source>
The metric will be named '''grok_example_lines_total'''. The metrics provided by the grok-exporter endpoint will be:
<pre>
# HELP grok_example_lines_total Example counter metric with labels.
# TYPE grok_example_lines_total counter
grok_example_lines_total{user="adam"} 1
</pre>
<br>
==== Value of the metric ====
For a counter-type metric we don't need to specify the value, as it simply counts the number of matches of the regular expression. For all other types we have to specify what counts as the value. It has to be defined in the '''value''' section of the metric definition. Variables can be referenced the same way as we saw in the labels definition chapter, in go-template style. Here is an example. The following two log lines are given:<pre>
7/30/2016 2:37:03 PM adam 1
7/30/2016 2:37:03 PM Adam 5
</pre>
For these we define the following histogram, which consists of two buckets, bucket 1 and bucket 2:
<source lang="C++">
metrics:
    - type: histogram
      name: grok_example_lines
      help: Example histogram metric.
      match: '%{DATE} %{TIME} %{WORD} %{USER:user} %{NUMBER:val}'
      value: '{{.val}}'
      buckets: [1, 2]
</source>
<br>
==== Functions ====
Functions were introduced in grok-exporter version '''0.2.7'''. We can apply functions to the metric value and to the values of its labels. String manipulation functions and arithmetic functions are available. The following two-argument arithmetic functions are supported:
* add
* subtract
* multiply
* divide
Functions have the following syntax: <pre>{{FUNCTION_NAME ATTR1 ATTR2}}</pre> where ATTR1 and ATTR2 can be either a natural number or a variable name. A variable name must start with a dot. Here is an example using the multiply function in the histogram definition from the example above:
<source lang = "C ++">
          value: "{{multiply .val 1000}}"
</source>
The outcome would be:<pre>
# HELP Example counter metric with labels.
# TYPE grok_example_lines histogram
...
</pre>
Since the two values would change to 1000 and 5000 respectively, both would fall into the +Inf bucket.
<br>
<br>
== Creating the grok config file ==
We have to put together a grok pattern that matches the HAproxy access-log lines and extracts all the attributes required for building the response-latency histogram. The required attributes are the following:
* total response time
* haproxy instance id
* openshift service namespace
<br>
Example haproxy access-log:
<pre>
Aug 6 20:53:30 192.168.122.223 haproxy[39]: 192.168.42.1:50708 [06/Aug/2019:20:53:30.267] public be_edge_http:mynamespace:test-app-service/pod:test-app-57574c8466-qbtg8:test-app-service:172.17.0.12:8080 1/0/0/321/321 200 135 - - --NI 2/2/0/1/0 0/0 "GET /test/slowresponse/1 HTTP/1.1"
</pre>
In the config.yml file we define a histogram that contains the response time of complete requests. This is a classic latency histogram that usually contains the following buckets (in seconds):<pre>
[0.1, 0.2, 0.4, 1, 3, 8, 20, 60, 120]
</pre>
By convention, response time histogram metrics are called '''<name prefix>_http_request_duration_seconds'''.
'''config.yml'''
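A sketch of what this config.yml could look like, assembled from the settings explained below. The match expression is a simplified illustration that fits the edge-route access-log line shown above; the authoritative file is in the git repository.
<source lang="C++">
global:
    config_version: 2
input:
    type: file
    path: /var/log/messages
    readall: true            # only for testing, set to false in production
grok:
    patterns_dir: ./patterns
metrics:
    - type: histogram
      name: haproxy_http_request_duration_seconds
      help: The request durations of the applications running in openshift that have route defined.
      # simplified pattern for the edge-route access-log format shown above
      match: '%{SYSLOGTIMESTAMP} %{IP} %{NOTSPACE:haproxy}: %{IP}:%{INT} \[%{DATA}\] %{NOTSPACE} be_edge_http:%{DATA:namespace}:%{DATA:service}/pod:%{DATA:pod_name}:%{DATA}:%{IP}:%{INT} %{INT}/%{INT}/%{INT}/%{INT}/%{INT:Tt} %{INT} %{INT} %{GREEDYDATA}'
      value: "{{divide .Tt 1000}}"   # Tt is logged in milliseconds
      buckets: [0.1, 0.2, 0.4, 1, 3, 8, 20, 60, 120]
      labels:
          haproxy: '{{.haproxy}}'
          namespace: '{{.namespace}}'
          service: '{{.service}}'
          pod_name: '{{.pod_name}}'
server:
    port: 9144
</source>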
Explanation:
* '''type:file''' -> read logs from file
* '''path: /var/log/messages''' -> The rsyslog server writes logs to /var/log/messages by default
* '''readall: true''' -> always read the entire log file. This should only be used for testing; in a live environment it always has to be set to false.
* '''patterns_dir: ./patterns''' -> the base directory of the pattern definitions inside the docker image.
* <pre>value: "{{divide .Tt 1000}}"</pre> The response time in the HAproxy log is in milliseconds, so we convert it to seconds.
* '''port: 9144''' -> the HTTP port of the /metrics endpoint.
<br>
{{warning|Do not forget to set the value of '''readall''' to '''false''' in a live environment, as leaving it on can significantly degrade performance}}
<br>
<br>
===Online grok testers===
There are several online grok testing tools; they help to put together the required grok pattern very effectively. Try this one: https://grokdebug.herokuapp.com/
:[[File:ClipCapIt-190808-170333.PNG]]
<br>
==Building the docker image==
The grok-exporter docker image is available on docker hub in several variants. The only problem with them is that they do not include the rsyslog server, which we need so that HAproxy can send its logs directly to the grok-exporter pod. <br>
docker-hub link: https://hub.docker.com/r/palobo/grok_exporter <br>
<br>
The second problem is that they are all based on an ubuntu base image, which makes it very difficult to get rsyslog to log to stdout (ubuntu does not support logging to stdout out of the box), which is required by the Kubernetes centralized log collector. We are going to receive the HAproxy logs, so both monitoring and centralized logging can be served from them. Therefore the original grok Dockerfile is ported to a '''centos 7''' base image and extended with the rsyslog installation.
<br>
All necessary files are available on my git-hub: https://github.com/berkiadam/haproxy-metrics/tree/master/grok-exporter-centos <br>I also created an ubuntu-based solution, which is an extension of the original docker-hub version; it can also be found on git-hub in the '''grok-exporter-ubuntu''' folder. In the rest of this chapter we are going to use the centOS version.
<br>
<br>
=== Dockerfile ===
We start from the official '''palobo/grok_exporter''' Dockerfile, extend it with the rsyslog installation and port it to centos: https://github.com/berkiadam/haproxy-metrics/tree/master/grok-exporter-centos
<br>
➲[[File:Grok-exporter-docker-build.zip|Download all files required for the Docker image build]]
<br>
CMD sh -c "nohup /usr/sbin/rsyslogd -i ${PID_DIR}/pid -n &" && ./grok_exporter -config /grok/config.yml
</source>
{{note|It is important to use grok-exporter version 0.2.7 or higher, as function handling first appeared in that version}}
<br>
<br>
The '''rsyslog.conf''' file must contain at least the following, which enables receiving logs on port 514 over both UDP and TCP (see the zip above for details). The received logs are written both to stdout and to /var/log/messages.
<pre>
$ModLoad imudp
$UDPServerRun 514
$ModLoad imtcp
$InputTCPServerRun 514
$ModLoad omstdout.so
*.* :omstdout:
*.* /var/log/messages
</pre>
=== Local build and local test ===
First, we build the docker image with the local docker daemon so that we can run it locally for testing. Later we will build it directly on the minishift VM, since we can only upload it to the minishift docker registry from there. Since, in the end, we will upload the image to a remote (not local) docker registry, it is important to follow the naming convention:<pre>
<repo URL>:<repo port>/<namespace>/<image-name>:<tag>
</pre>
We will upload the image to the docker registry running on minishift, so it is important to specify the address and port of the minishift docker registry and the OpenShift namespace where the image will be deployed.<pre>
# docker build -t 172.30.1.1:5000/default/grok_exporter:1.1.0 .
</pre>
The resulting image can be easily tested by running it locally with docker. Create a HAproxy test log file ('''haproxy.log''') with the following content. During the test it will be processed by grok-exporter as if it had been produced by HAproxy.
<pre>
Aug 6 20:53:30 192.168.122.223 haproxy[39]: 192.168.42.1:50708 [06/Aug/2019:20:53:30.267] public be_edge_http:mynamespace:test-app-service/pod:test-app-57574c8466-qbtg8:test-app-service:172.17.0.12:8080 1/0/0/321/321 200 135 - - --NI 2/2/0/1/0 0/0 "GET /test/slowresponse/1 HTTP/1.1"
</pre>
<br>
Put the grok config file '''config.yml''' created above into the same folder. In config.yml, change input.path to '''/grok/haproxy.log''', where our test log content is. Then start the container with the following '''docker run''' command:<pre>
# docker run -d -p 9144:9144 -p 514:514 -v $(pwd)/config.yml:/etc/grok_exporter/config.yml -v $(pwd)/haproxy.log:/grok/haproxy.log --name grok 172.30.1.1:5000/default/grok_exporter:1.1.0
</pre>
<br>
After starting it, check the logs and confirm that grok-exporter and rsyslog have both started:<pre>
# docker logs grok
  * Starting enhanced syslogd rsyslogd
</pre>
<br>
The metrics are then available in the browser at http://localhost:9144/metrics:
<pre>
...
</pre>
<br>
<br>
As a second step, verify that the '''rsyslog''' server running in the docker container can receive remote log messages. To do this, first enter the container with the exec command and check the content of the /var/log/messages file in -f (follow) mode.
<pre>
# docker exec -it grok /bin/bash
# tail -f /var/log/messages
</pre>
<br>
Now, from the host machine, use the '''logger''' command to send a log message to the rsyslog server running in the container, on port 514:
<pre>
# logger -n localhost -P 514 -T "this is the message"
</pre>
(The -T flag means TCP.)
The log should then appear in the '''syslog''' file:
<pre>
Aug 8 16:54:25 dell adam this is the message
</pre>
We can now delete the local docker container.
<br>
<br>
===Remote build===
We want to upload the finished docker image to minishift's own registry. To do so, the image has to be built with the minishift VM's local docker daemon, because the minishift registry can only be accessed from there. <br>
Details can be found here: [[Openshift_basics#Minishfit_docker_registry|➲Image push to the minishift docker registry]]
In order for the '''admin''' user to have the right to upload the image into the '''default''' namespace of the minishift registry, where the router also runs, it needs to be given the '''cluster-admin''' role. It is important to log in with '''-u system:admin''', not simply with '''oc login''', otherwise we won't have the right to issue the '''oc adm''' command. We will refer to the same user in the '''--as''' parameter.
<pre>
# oc login -u system:admin
# oc adm policy add-cluster-role-to-user cluster-admin admin --as=system:admin
cluster role "cluster-admin" added: "admin"
</pre>
{{note|If we get the error '''Error from server (NotFound): the server could not find the requested resource''', it probably means that our '''oc''' client is older than the OpenShift version}}
Redirect our local docker client to the docker daemon running on the minishift VM, then log in to the minishift docker registry:
<pre>
# minishift docker-env
# eval $(minishift docker-env)
# oc login
Username: admin
Password: <admin>
# docker login -u admin -p $(oc whoami -t) $(minishift openshift registry)
Login Succeeded
</pre>
Build the image on the minishift VM as well:
<pre>
# docker build -t 172.30.1.1:5000/default/grok_exporter:1.1.0 .
</pre>
Push the image to the minishift docker registry with the '''docker push''' command:
<pre>
# docker push 172.30.1.1:5000/default/grok_exporter:1.1.0
</pre>
<br>
==Required Kubernetes objects==
For the haproxy-exporter we will create a serviceAccount, a deployment, a service and a configMap, where we will store the grok-exporter configuration. In addition, we will extend the '''SecurityContextConstraints''' object called '''anyuid''', because the rsyslog server requires the grok-exporter container to run as root.
* haproxy-exporter service account
* scc-anyuid.yaml
The full configuration can be downloaded here: [[File:Haproxy-kubernetes-objects.zip]], or can be found in the git repository: https://github.com/berkiadam/haproxy-metrics
<br>
<br>
===Create the ServiceAccount===
The haproxy-exporter needs its own serviceAccount, for which we will allow running the container as root (privileged). The rsyslog server needs this.
<pre>
# kubectl create serviceaccount haproxy-exporter -n default
serviceaccount/haproxy-exporter created
</pre>
As a result, the following serviceAccount definition was created:
<source lang="C++">
apiVersion: v1
kind: ServiceAccount
metadata:
creationTimestamp: "2019-08-10T12:27:52Z"
name: haproxy-exporter
namespace: default
resourceVersion: "837500"
selfLink: /api/v1/namespaces/default/serviceaccounts/haproxy-exporter
uid: 45a82935-bb6a-11e9-9175-525400efb4ec
secrets:
- name: haproxy-exporter-token-8svkx
</source>
===Additional Kubernetes objects===
<br>
<br>
Because the haproxy-exporter runs an rsyslog server, its container must run as root. To do this, we need to add the serviceAccount belonging to the haproxy-exporter to the SCC named '''anyuid'''. We don't need the '''privileged''' SCC: the container wants to start as root anyway, so we don't need to force it through OpenShift configuration, we just have to allow it. Without running as root, rsyslog would not be able to create its sockets.
{{warning|The admin rolebinding that the developer user has for the mynamespace namespace is not enough to manage SCCs. You have to log in as admin: oc login -u system:admin}}
<br><br>
Let's list the SCCs:
<pre>
# kubectl get SecurityContextConstraints
</pre>
<br>
The haproxy-exporter serviceAccount must be added to the users section of the '''anyuid''' SCC in the following format:  - system:serviceaccount:<namespace>:<serviceAccount>
<br>
'''scc-anyuid.yaml'''
<source lang="C++">
kind: SecurityContextConstraints
metadata:
  name: anyuid
...
users:
- system:serviceaccount:default:haproxy-exporter
...
</source>
Since this is an existing '''scc''' and we only want to apply a minor change, we can edit it in place with the '''oc edit''' command:
<pre>
# oc edit scc anyuid
</pre>
<br>
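The deployment and the service definitions themselves are in the downloadable zip and in the git repository; a minimal sketch of them, with assumed labels, port names taken from the Endpoints object shown later, and an assumed ConfigMap mount, could look like this:
<source lang="C++">
apiVersion: apps/v1
kind: Deployment
metadata:
  name: haproxy-exporter
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: haproxy-exporter           # assumed label
  template:
    metadata:
      labels:
        app: haproxy-exporter
    spec:
      serviceAccountName: haproxy-exporter
      containers:
      - name: grok-exporter
        image: 172.30.1.1:5000/default/grok_exporter:1.1.0
        ports:
        - containerPort: 9144
          protocol: TCP
        - containerPort: 514
          protocol: TCP
        - containerPort: 514
          protocol: UDP
        volumeMounts:
        - name: grok-config
          mountPath: /grok/config.yml   # must match the path the image CMD expects
          subPath: config.yml
        - name: log-dir
          mountPath: /var/log           # emptyDir for the rsyslog output
      volumes:
      - name: grok-config
        configMap:
          name: haproxy-exporter        # the ConfigMap holding config.yml
      - name: log-dir
        emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  name: haproxy-exporter-service
  namespace: default
spec:
  type: ClusterIP
  selector:
    app: haproxy-exporter
  ports:
  - name: metrics
    port: 9144
    protocol: TCP
  - name: log-tcp
    port: 514
    protocol: TCP
  - name: log-udp
    port: 514
    protocol: UDP
</source>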
===Create the objects===
<pre>
# kubectl apply -f cm-haproxy-exporter.yaml
configmap/haproxy-exporter created
</pre>
<pre>
# kubectl apply -f deployment-haproxy-exporter.yaml
deployment.apps/haproxy-exporter created
# kubectl rollout status deployment haproxy-exporter -n default
deployment "haproxy-exporter" successfully rolled out
</pre>
<br>
===Testing===
Find the haproxy-exporter pod and check its logs:
<pre>
# kubectl logs haproxy-exporter-744d84f5df-9fj9m -n default
 * Starting enhanced syslogd rsyslogd    ...done.
Starting server on http://haproxy-exporter-744d84f5df-9fj9m:9144/metrics
</pre>
Then enter the container and test the rsyslog server:
<pre>
# kubectl exec -it haproxy-exporter-647d7dfcdf-gbgrg /bin/bash -n default
</pre>
Then use the '''logger''' command to send a log message to rsyslog:
<pre>
logger -n localhost -P 514 -T "this is the message"
</pre>
Now let's check the content of the /var/log/messages file:
<pre>
# cat messages
Aug 28 19:16:09 localhost root: this is the message
</pre>
Exit the container and fetch the pod logs again to see whether the log was also written to stdout:
<pre>
# kubectl logs haproxy-exporter-647d7dfcdf-gbgrg -n default
Starting server on http://haproxy-exporter-647d7dfcdf-gbgrg:9144/metrics
2019-08-28T19:16:09+00:00 localhost root: this is the message
</pre>
<br>
==HAproxy configuration==
===Setting the environment variables===
We will set the address of the rsyslog server running in the haproxy-exporter pod for HAproxy via an environment variable. As a first step, list the haproxy-exporter service:
<pre>
# kubectl get svc -n default
NAME                       TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                    AGE
haproxy-exporter-service   ClusterIP   172.30.213.183   <none>        9144/TCP,514/TCP,514/UDP   15s
..
</pre>
HAproxy stores the rsyslog server address in the environment variable named '''ROUTER_SYSLOG_ADDRESS''' (part of the router DeploymentConfig). We can overwrite it at runtime with the '''oc set env''' command. After changing the variable, the pod restarts automatically.
<pre>
# oc set env dc/myrouter ROUTER_SYSLOG_ADDRESS=172.30.213.183 -n default
deploymentconfig.apps.openshift.io/myrouter updated
</pre>
{{note|In minishift, name resolution of Kubernetes service names does not work in the router container, because it is configured with the minishift VM's DNS server instead of the Kubernetes cluster DNS. Therefore we have no choice but to use the service's IP address instead of its name. In a real OpenShift environment, specify the name of the service}}
As a second step, change the HAproxy log level to debug, because access logs are only produced at debug level.
<pre>
# oc set env dc/myrouter ROUTER_LOG_LEVEL=debug -n default
deploymentconfig.apps.openshift.io/myrouter updated
</pre>
{{warning|A performance test must be carried out to see how much extra load HAproxy running in debug mode causes}}
<br>
As a result of modifying the two environment variables, the HAproxy configuration in '''/var/lib/haproxy/conf/haproxy.config''' in the router container has changed to the following:
<pre>
# kubectl exec -it myrouter-5-hf5cs /bin/bash -n default
$ cat /var/lib/haproxy/conf/haproxy.config
global
..
  log 172.30.82.232 local1 debug
</pre>
The point is that the log parameter now contains the address of the haproxy-exporter service, and the log level is '''debug'''.
<br>
<br>
<br>
===Testing the rsyslog server===
Generate some traffic through HAproxy, then go back into the haproxy-exporter container and list the content of the messages file.
<pre>
# kubectl exec -it haproxy-exporter-744d84f5df-9fj9m /bin/bash -n default
#
# tail -f /var/log/messages
Aug 9 12:52:17 192.168.122.223 haproxy[24]: Proxy fe_sni stopped (FE: 0 conns, BE: 0 conns).
Aug 9 12:52:17 192.168.122.223 haproxy[24]: Proxy be_no_sni stopped (FE: 0 conns, BE: 0 conns).
Aug 9 12:52:17 192.168.122.223 haproxy[24]: Proxy fe_no_sni stopped (FE: 0 conns, BE: 0 conns).
Aug 9 12:52:17 192.168.122.223 haproxy[24]: Proxy openshift_default stopped (FE: 0 conns, BE: 1 conns).
Aug 9 12:52:17 192.168.122.223 haproxy[24]: Proxy be_edge_http:dsp:nginx-route stopped (FE: 0 conns, BE: 0 conns).
Aug 9 12:52:17 192.168.122.223 haproxy[24]: Proxy be_http:mynamespace:prometheus-alertmanager-jv69s stopped (FE: 0 conns, BE: 0 conns).
Aug 9 12:52:17 192.168.122.223 haproxy[24]: Proxy be_http:mynamespace:prometheus-server-2z6zc stopped (FE: 0 conns, BE: 0 conns).
Aug 9 12:52:17 192.168.122.223 haproxy[24]: Proxy be_edge_http:mynamespace:test-app-service stopped (FE: 0 conns, BE: 0 conns).
Aug 9 12:52:17 192.168.122.223 haproxy[24]: Proxy be_edge_http:myproject:nginx-route stopped (FE: 0 conns, BE: 0 conns).
Aug 9 12:52:17 192.168.122.223 haproxy[32]: 127.0.0.1:43720 [09/Aug/2019:12:52:17.361] public openshift_default/<NOSRV> 1/-1/-1/-1/0 503 3278 - - SC-- 1/1/0/0/0 0/0 "HEAD / HTTP/1.1"
</pre>
If we look at the logs of the haproxy-exporter pod, we should see the same thing.
<pre>
...
Aug 9 12:57:21 192.168.122.223 haproxy[32]: 192.168.42.1:48266 [09/Aug/2019:12:57:20.636] public be_edge_http:mynamespace:test-app-service/pod:test-app-57574c8466-qbtg8:test-app-service:172.17.0.17:8080 1/0/12/428/440 200 135 - - --II 2/2/0/1/0 0/0 "GET /test/slowresponse/1 HTTP/1.1"
Aug 9 12:57:28 192.168.122.223 haproxy[32]: 192.168.42.1:48266 [09/Aug/2019:12:57:21.075] public be_edge_http:mynamespace:test-app-service/pod:test-app-57574c8466-qbtg8:test-app-service:172.17.0.17:8080 4334/0/0/3021/7354 200 135 - - --VN 2/2/0/1/0 0/0 "GET /test/slowresponse/3000 HTTP/1.1"
Aug 9 12:57:28 192.168.122.223 haproxy[32]: 192.168.42.1:48266 [09/Aug/2019:12:57:28.430] public be_edge_http:mynamespace:test-app-service/pod:test-app-57574c8466-qbtg8:test-app-service:172.17.0.17:8080 90/0/0/100/189 404 539 - - --VN 2/2/0/1/0 0/0 "GET /favicon.ico HTTP/1.1"
Aug 9 12:57:35 192.168.122.223 haproxy[32]: 192.168.42.1:48268 [09/Aug/2019:12:57:20.648] public public/<NOSRV> -1/-1/-1/-1/15002 408 212 - - cR-- 2/2/0/0/0 0/0 "<BADREQ>"
</pre>
===Testing the grok-exporter component===
Let's fetch the grok-exporter metrics at http://<pod IP>:9144/metrics. You can open this URL either from the haproxy-exporter pod itself via localhost, or from any other pod using the haproxy-exporter pod's IP address. In the example below I enter the test-app pod. We should see the '''haproxy_http_request_duration_seconds_bucket''' histogram among the metrics.
<pre>
# kubectl exec -it test-app-57574c8466-qbtg8 /bin/bash -n mynamespace
$ curl http://172.30.213.183:9144/metrics
...
# HELP haproxy_http_request_duration_seconds The request durations of the applications running in openshift that have route defined.
# TYPE haproxy_http_request_duration_seconds histogram
haproxy_http_request_duration_seconds_bucket{haproxy="haproxy[32]",namespace="mynamespace",pod_name="test-app-57574c8466-qbtg8",service="test-app-service",le="0.1"} 0
haproxy_http_request_duration_seconds_bucket{haproxy="haproxy[32]",namespace="mynamespace",pod_name="test-app-57574c8466-qbtg8",service="test-app-service",le="0.2"} 1
haproxy_http_request_duration_seconds_bucket{haproxy="haproxy[32]",namespace="mynamespace",pod_name="test-app-57574c8466-qbtg8",service="test-app-service",le="0.4"} 1
haproxy_http_request_duration_seconds_bucket{haproxy="haproxy[32]",namespace="mynamespace",pod_name="test-app-57574c8466-qbtg8",service="test-app-service",le="1"} 2
haproxy_http_request_duration_seconds_bucket{haproxy="haproxy[32]",namespace="mynamespace",pod_name="test-app-57574c8466-qbtg8",service="test-app-service",le="3"} 2
haproxy_http_request_duration_seconds_bucket{haproxy="haproxy[32]",namespace="mynamespace",pod_name="test-app-57574c8466-qbtg8",service="test-app-service",le="8"} 3
haproxy_http_request_duration_seconds_bucket{haproxy="haproxy[32]",namespace="mynamespace",pod_name="test-app-57574c8466-qbtg8",service="test-app-service",le="20"} 3
haproxy_http_request_duration_seconds_bucket{haproxy="haproxy[32]",namespace="mynamespace",pod_name="test-app-57574c8466-qbtg8",service="test-app-service",le="60"} 3
haproxy_http_request_duration_seconds_bucket{haproxy="haproxy[32]",namespace="mynamespace",pod_name="test-app-57574c8466-qbtg8",service="test-app-service",le="120"} 3
haproxy_http_request_duration_seconds_bucket{haproxy="haproxy[32]",namespace="mynamespace",pod_name="test-app-57574c8466-qbtg8",service="test-app-service",le="+Inf"} 3
haproxy_http_request_duration_seconds_sum{haproxy="haproxy[32]",namespace="mynamespace",pod_name="test-app-57574c8466-qbtg8",service="test-app-service"} 7.9830000000000005
haproxy_http_request_duration_seconds_count{haproxy="haproxy[32]",namespace="mynamespace",pod_name="test-app-57574c8466-qbtg8",service="test-app-service"} 3
</pre>
<br>
==Prometheus settings==
===Static configuration===
<source lang="C++">
    - job_name: grok-exporter
      scrape_interval: 5s
      metrics_path: /metrics
      static_configs:
        - targets: ['grok-exporter-service.default:9144']
</source>
=== Pod Level Data Collection ===
We want the haproxy-exporter pods to be scalable. This requires that Prometheus does not scrape the metrics through the service (because the service does load balancing), but directly from the pods. So Prometheus must query the '''Endpoints''' object assigned to the haproxy-exporter service from the Kubernetes API, which contains the list of IP addresses of the pods. We will use the '''kubernetes_sd_configs''' element to achieve this. (This requires Prometheus to be able to communicate with the Kubernetes API. For details, see [[Prometheus_on_Kubernetes]])
When using '''kubernetes_sd_configs''', Prometheus always gets a list of a specific type of Kubernetes object from the API (node, service, endpoints, pod), and then it identifies from that list the resources it wants to collect metrics from. In the '''relabel_configs''' section of the Prometheus configuration we define filter conditions on the labels of the given Kubernetes resources to identify the needed ones. In this case, we want to find the Endpoints object belonging to the haproxy-exporter service, because it allows Prometheus to find all the pods of the service. So, based on the labels, we want to find the Endpoints object called '''haproxy-exporter-service''' that has a port called '''metrics''', through which Prometheus can scrape the metrics. The default scrape URL in Prometheus is '''/metrics''', so we don't have to define it explicitly; grok-exporter uses this as well.
<pre>
# kubectl get Endpoints haproxy-exporter-service -n default -o yaml
kind: Endpoints
metadata:
  name: haproxy-exporter-service
...
  ports:
  - name: log-udp
    port: 514
    protocol: UDP
  - name: metrics
    port: 9144
    protocol: TCP
  - name: log-tcp
    port: 514
    protocol: TCP
</pre>
We are looking for two labels in the Endpoints list:
* __meta_kubernetes_endpoint_port_name: metrics -> 9144
* __meta_kubernetes_service_name: haproxy-exporter-service
<br>
The config-map that describes prometheus.yml should be extended with the following:
<source lang="C++">
    - job_name: haproxy-exporter
      scheme: http
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        server_name: router.default.svc
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      kubernetes_sd_configs:
      - role: endpoints
        namespaces:
          names:
          - default
      relabel_configs:
      - source_labels: [__meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
        action: keep
        regex: haproxy-exporter-service;metrics
</source>
Reload the configMap:
<pre>
# kubectl apply -f cm-prometheus-server-haproxy-full.yaml
</pre>
Then wait for Prometheus to re-read the configuration file:
<pre>
# kubectl logs -f -c prometheus-server prometheus-server-75c9d576c9-gjlcr -n mynamespace
...
level=info ts=2019-07-22T20:25:36.016Z caller=main.go:730 msg="Loading configuration file" filename=/etc/config/prometheus.yml
</pre>
<br>
Then, on the http://mon.192.168.42.185.nip.io/targets screen, verify that Prometheus can scrape the haproxy-exporter target:
:[[File:ClipCapIt-190809-164445.PNG]]
<br>
===Scaling the haproxy-exporter===
<pre>
# kubectl scale deployment haproxy-exporter --replicas=2 -n default
deployment.extensions/haproxy-exporter scaled
</pre>
<pre>
# kubectl get deployment haproxy-exporter -n default
NAME               DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
haproxy-exporter   2         2         2            2           3h
</pre>
<br>
:[[File:ClipCapIt-190809-174825.PNG]]
<br>
==Metric types==
===haproxy_http_request_duration_seconds_bucket===
type: histogram
<br>
===haproxy_http_request_duration_seconds_count===
type: counter<br>
The total number of requests that fall into the given histogram.
<pre>
haproxy_http_request_duration_seconds_count{haproxy="haproxy[39]",job="haproxy-exporter",namespace="mynamespace",pod_name="test-app",service="test-app-service"} 5
</pre>
<br>
<br>
===haproxy_http_request_duration_seconds_sum===
type: counter<br>
The sum of the response times in the given histogram. Based on the previous example, there were a total of 5 requests and the summed serving time was 13.663 s.
<pre>
haproxy_http_request_duration_seconds_sum{haproxy="haproxy[39]",job="haproxy-exporter",namespace="mynamespace",pod_name="test-app",service="test-app-service"} 13.663
</pre>
<br>
=OpenShift router + rsyslog=
Starting with OpenShift 3.11, it is possible to create a router for which OpenShift automatically starts a sidecar rsyslog container in the router pod and configures HAproxy to send its logs to the rsyslog server through a socket (on an emptyDir volume); rsyslog then writes them to stdout by default. The rsyslog configuration is stored in a configMap.
:[[File:ClipCapIt-190810-164907.PNG]]
<br>
You can create a router with a syslog server using the '''--extended-logging''' switch of the '''oc adm router''' command.
<pre>
# oc adm router myrouter --extended-logging -n default
info: password for stats user admin has been set to O6S6Ao3wTX
--> Creating router myrouter ...
    configmap "rsyslog-config" created
    warning: serviceaccounts "router" already exists
    clusterrolebinding.authorization.openshift.io "router-myrouter-role" created
    deploymentconfig.apps.openshift.io "myrouter" created
    service "myrouter" created
--> Success
</pre>
<br>
Turn on debug level logging in HAproxy:
<pre>
# oc set env dc/myrouter ROUTER_LOG_LEVEL=debug -n default
deploymentconfig.apps.openshift.io/myrouter updated
</pre>
<br>
There are two containers in the new router pod:
<pre>
# kubectl describe pod/myrouter-2-bps5v -n default
..
Containers:
  router:
    Image:    openshift/origin-haproxy-router:v3.11.0
    Mounts:
      /var/lib/rsyslog from rsyslog-socket (rw)
...
  syslog:
    Image:    openshift/origin-haproxy-router:v3.11.0
    Mounts:
      /etc/rsyslog from rsyslog-config (rw)
      /var/lib/rsyslog from rsyslog-socket (rw)
...
  rsyslog-config:
    Type:        ConfigMap (a volume populated by a ConfigMap)
    Name:        rsyslog-config
    Optional:    false
  rsyslog-socket:
    Type:        EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:   <unset>
</pre>
You can see that the '''/var/lib/rsyslog/''' folder is mounted in both containers. HAproxy will create the rsyslog.sock file there, as set in its configuration file.
<br>
<br>
===router container===
When we enter the router container, we can see that the configuration has already been updated:
<pre>
# kubectl exec -it myrouter-2-bps5v /bin/bash -n default -c router
bash-4.2$ cat /var/lib/haproxy/conf/haproxy.config
global
...
  log /var/lib/rsyslog/rsyslog.sock local1 debug
...
defaults
...
  option httplog   --> Enable logging of HTTP request, session state and timers
...
backend be_edge_http:mynamespace:test-app-service
</pre>
<br>
<br>
===rsyslog container===
<pre>
# kubectl exec -it myrouter-2-bps5v /bin/bash -n default -c syslog
$ cat /etc/rsyslog/rsyslog.conf
$ModLoad imuxsock
$SystemLogSocketName /var/lib/rsyslog/rsyslog.sock
$ModLoad omstdout.so
*.* :omstdout:
</pre>
<br>
If you want to reconfigure rsyslog to send the logs to e.g. logstash, you only need to rewrite the configMap. By default, rsyslog only writes what it receives to stdout.
<pre>
# kubectl get cm rsyslog-config -n default -o yaml
apiVersion: v1
data:
  rsyslog.conf: |
    $ModLoad imuxsock
    $SystemLogSocketName /var/lib/rsyslog/rsyslog.sock
    $ModLoad omstdout.so
    *.* :omstdout:
kind: ConfigMap
metadata:
  name: rsyslog-config
  namespace: default
</pre>
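For example, a modified ConfigMap that keeps the stdout output but also forwards every message to a logstash instance over TCP could look like the sketch below; the logstash host name and port are assumptions:
<source lang="C++">
apiVersion: v1
kind: ConfigMap
metadata:
  name: rsyslog-config
  namespace: default
data:
  rsyslog.conf: |
    $ModLoad imuxsock
    $SystemLogSocketName /var/lib/rsyslog/rsyslog.sock
    $ModLoad omstdout.so
    *.* :omstdout:
    # forward everything over TCP as well (assumed logstash address and port)
    *.* @@logstash.mynamespace.svc:5140
</source>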
<br>
<br>
===Viewing the HAproxy logs===
<pre>
# kubectl logs -f myrouter-2-bps5v -c syslog
</pre>