Openshift - HAproxy metrics EN
[[Openshift - HAproxy metrics|Openshift - HAproxy metrics HU]]
:[[File:ClipCapIt-190807-102633.PNG]]
<pre>
router    ClusterIP   172.30.130.191   <none>   80/TCP,443/TCP,1936/TCP   4d
</pre>
You can see that there is an extra port listed besides the default 80 and 443: '''1936''', which is the port of the metrics endpoint.
==Prometheus integration==
Let's examine the '''Endpoint''' definition of the HAproxy router. Based on that, we can create the Prometheus configuration that will find, at runtime, all the pods running HAproxy instances. We have to find the OpenShift endpoint object with the name '''router''' that has a port definition called '''1936-tcp'''. Prometheus will extract the port number for the metrics query (/metrics) from this port definition.
<pre>
# kubectl get Endpoints router -n default -o yaml
</pre>
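Based on this, the scrape job can be built with '''kubernetes_sd_configs'''. A possible sketch (the job name is arbitrary, the relabel rules use the standard Prometheus meta labels; the router's metrics endpoint may additionally require basic authentication):
<source lang="C++">
    - job_name: openshift-router
      kubernetes_sd_configs:
      - role: endpoints
        namespaces:
          names: [default]
      relabel_configs:
      # keep only the endpoint object named 'router'
      - source_labels: [__meta_kubernetes_endpoints_name]
        action: keep
        regex: router
      # keep only the port definition called '1936-tcp'
      - source_labels: [__meta_kubernetes_endpoint_port_name]
        action: keep
        regex: 1936-tcp
</source>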
Let's look into the logs of the sidecar container running in the Prometheus pod (responsible for reloading the configuration).
<pre>
# kubectl logs -c prometheus-server-configmap-reload prometheus-server-75c9d576c9-gjlcr -n mynamespace
</pre>
Let's check the Prometheus logs as well:
<pre>
# kubectl logs -c prometheus-server prometheus-server-75c9d576c9-gjlcr -n mynamespace
</pre>
Next, open the Prometheus console and navigate to the 'Targets' page: http://mon.192.168.42.185.nip.io/targets
[[File: ClipCapIt-190722-233253.PNG]]<br>
If there were more routers in the cluster, they would all appear here, listed as separate endpoints.
<br>
<br>
http://people.redhat.com/jrivera/openshift-docs_preview/openshift-origin/glusterfs-review/architecture/networking/haproxy-router.html <br>
At first glance, there are two meaningful metrics provided by the HAproxy. These are the following:
<br>
<br>
=Collecting metrics from the access logs=
==Overview==
The task is to process the access log of HAproxy with a log parser and generate Prometheus metrics that are made available to Prometheus through an HTTP endpoint. We will use the grok-exporter tool, which can do both: it can read logs from a file or stdin and generate metrics based on them. The grok-exporter will receive the logs from HAproxy via an rsyslog server. Rsyslog will put the logs into a file from which grok-exporter will be able to read them and convert them into Prometheus metrics.
Necessary steps:
* We have to create a docker image from grok-exporter that also contains rsyslog. (The container must be able to run the rsyslog server as root, which requires extra OpenShift configuration.)
* The grok-exporter configuration will be placed in an OpenShift ConfigMap, and the rsyslog workspace must be an OpenShift volume. (Writing a container's file system at runtime is really inefficient.)
* We have to create a ClusterIP-type service that can perform load-balancing between the grok-exporter pods.
* The routers (HAproxy) should be configured to write access logs in debug mode and send them to the remote rsyslog server running next to the grok-exporter.
* The rsyslog server running in the grok-exporter pod will both write the received HAproxy access logs into a file ('''/var/log/messages''', an emptyDir-type volume) and send them to '''stdout''' as well for central log processing.
* Logs written to stdout are picked up by the docker-log-driver and forwarded to the centralized log architecture (log retention).
* The grok-exporter program reads '''/var/log/messages''' and generates Prometheus metrics from the HAproxy access logs.
* The Prometheus scrape config has to be extended with a '''kubernetes_sd_configs''' section. Prometheus must collect the metrics directly from the grok-exporter pods, not through the Kubernetes service, to bypass load-balancing.
<br>
<br>
==Introduction of grok-exporter==
Grok-exporter is a tool that can process logs based on regular expressions and produce the four basic types of Prometheus metrics:
* gauge
* counter
* histogram
* summary (quantile)
Grok-exporter is based on the implementation of '''logstash-grok''', and uses the patterns and functions defined for logstash.
Detailed documentation: <br>
The grok-exporter can read from three types of input sources:
* '''file''': we will stick to this
* '''webhook''': This solution could also be used with logstash acting as the rsyslog server. Logstash can send the logs to the grok-exporter webhook with the "http-output" logstash plugin.
* '''stdin''': With rsyslog, stdin can also be used. This requires the '''omprog''' module, which can read data from a socket and pass it on through stdin: https://www.rsyslog.com/doc/v8-stable/configuration/modules/omprog.html
=== Alternative Solutions ===
'''Fluentd''' <br>
To achieve the same goal with fluentd, we would need the following fluentd plugins (I haven't tried this):
* fluent-plugin-rewrite-tag-filter
* fluent-plugin-prometheus
'''mtail''':<br>
The other alternative would be google's '''mtail''', which is said to be more efficient in processing logs than the grok engine.<br>
https://github.com/google/mtail
The grok-exporter configuration file consists of the following main sections:
* global: General settings, such as the config version.
* input: Tells where and how to retrieve the logs. It can be stdin, file or webhook. We will use file input.
* grok: Location of the grok patterns. Pattern definitions are stored in the /grok/patterns folder by default.
* metrics: This is the most important part. Here you need to define the metrics and the associated regular expressions.
* server: Contains the port that the http metrics server should listen on.
<br>
====Metrics====
Metrics must be defined per metric type. The four basic types of Prometheus metrics are supported: '''Gauge, Counter, Histogram, Summary''' (quantile)
<br>
Each definition contains four parts:
* name: This will be the name of the metric.
* help: This is the help text for the metric.
* match: Describes the structure of the log string as a regular expression. Here you can use pre-defined grok patterns:
** '''BASIC grok patterns''': https://github.com/logstash-plugins/logstash-patterns-core/blob/master/patterns/grok-patterns
** '''HAPROXY patterns''': https://github.com/logstash-plugins/logstash-patterns-core/blob/master/patterns/haproxy
* labels: Here we can add Prometheus labels to the metrics. A named result group from the match section can be referenced here, which creates a label whose value will be the parsed data.
<br>
====match definition====
Grok assumes that each element is separated by a single space in the source log files. In the match section, you have to write a regular expression using grok building blocks. Each building block has the format '''%{PATTERN_NAME}''', where PATTERN_NAME must be an existing, predefined grok pattern. The most common type is '''%{DATA}''', which refers to an arbitrary data structure that contains no whitespace. There are several compound patterns that are built up from other basic grok patterns. We can assign result groups to named variables that can be used as the value of the Prometheus metric or as label values. The variable name must be placed inside the curly brackets of the pattern, separated from the pattern name by a colon, for example:
<pre>
%{DATA:this_is_the_name}
</pre>
The result of the regular expression will be assigned to the variable '''this_is_the_name''', which can be referenced when defining the value of the Prometheus metric or the metric's labels.
<br>
====labels definition====
In the labels section we can define labels for the generated Prometheus metric. The labels are defined as a name:value list, where the value can be a string constant or a variable defined for a pattern in the match section. The variable must be referenced in go-template style, between double curly brackets, starting with a dot. For example, if we used the '''%{DATA:this_is_the_name}''' pattern in the match section, we can define the 'mylabel' Prometheus label with the value of the 'this_is_the_name' variable in the following way: <br>
<pre>
mylabel: '{{.this_is_the_name}}'
</pre>
Let's assume that the 'this_is_the_name' variable's value is 'myvalue'. Then the metric would receive the label '''{mylabel="myvalue"}'''. <br>We are going to demonstrate a full metric definition example in the following section. <br>
The following log line is given:
<pre>
7/30/2016 2:37:03 PM adam 1.5
</pre>
And the following metric rule definition is given in the grok config:
<source lang="C++">
metrics:
user: '{{.user}}'
</source>
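The definition above is only a fragment. The full rule presumably looked like the following, based on the counter example of the grok-exporter documentation (the match pattern is my reconstruction for the log line above):
<source lang="C++">
metrics:
    - type: counter
      name: grok_example_lines_total
      help: Example counter metric with labels.
      # DATE matches 7/30/2016, TIME matches 2:37:03, WORD matches PM
      match: '%{DATE} %{TIME} %{WORD} %{USER:user} %{NUMBER}'
      labels:
          user: '{{.user}}'
</source>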
The metric will be named '''grok_example_lines_total'''. Here is the final metric provided by the grok-exporter metrics endpoint:
<pre>
# HELP grok_example_lines_total Example counter metric with labels.
# TYPE grok_example_lines_total counter
grok_example_lines_total{user="adam"} 1
</pre>
<br>
====Determining the value of a metric====
For a counter-type metric, we don't need to determine the value of the metric, as it simply counts the number of matches of the regular expression. In contrast, for all other types, we have to specify what is considered the value. It has to be defined in the '''value''' section of the metric definition. Variables can be referenced in the same way as we saw in the labels definition chapter, in go-template style. Here is an example. The following two log lines are given:
<pre>
7/30/2016 2:37:03 PM adam 1
7/30/2016 2:37:03 PM Adam 5
</pre>
And for this we define the following histogram, which consists of two buckets, bucket 1 and bucket 2:
<source lang="C++">
metrics:
</source>
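The full histogram definition presumably looked like the following (a reconstruction; the metric name and the .val variable come from the Functions section below, the match pattern is an assumption):
<source lang="C++">
metrics:
    - type: histogram
      name: grok_example_lines
      help: Example histogram metric.
      match: '%{DATE} %{TIME} %{WORD} %{USER:user} %{NUMBER:val}'
      # the parsed 'val' result group becomes the observed value
      value: '{{.val}}'
      buckets: [1, 2]
</source>
<br>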
==== Functions ====
Functions were introduced in grok-exporter version '''0.2.7'''. We can apply functions to the metric value and to the values of its labels. String manipulation functions and arithmetic functions are available. The following two-argument arithmetic functions are supported:
* add
* subtract
* multiply
* divide
Functions have the following syntax: <pre>{{FUNCTION_NAME ATTR1 ATTR2}}</pre> where ATTR1 and ATTR2 can be either a natural number or a variable name. The variable name must start with a dot. Here is an example using the multiply function on the 'grok_example_lines' metric definition from the example above:
<source lang="C++">
          value: "{{multiply .val 1000}}"
</source>
The outcome would be:
<pre>
# HELP Example counter metric with labels.
...
</pre>
Since the two values would change to 1000 and 5000 respectively, both would fall into the infinite (+Inf) bucket.
<br>
<br>
==Creating the grok config file==
We have to compile a grok pattern that fits the HAproxy access-log lines and can extract all the attributes required for creating the response-latency histogram. The required attributes are the following:
* total response time
* haproxy instance id
* openshift service namespace
In the config.yml file, we will define a histogram that contains the response time for full requests. This is a classic latency histogram, usually containing the following buckets (in seconds):
<pre>
[0.1, 0.2, 0.4, 1, 3, 8, 20, 60, 120]
</pre>
By convention, response time histogram metrics are called '''<name prefix>_http_request_duration_seconds'''.
'''config.yml'''
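The config.yml content could look like the following. This is an untested sketch: only the fields referenced in the explanation below are captured (the .Tt total response time and the namespace parsed from the HAproxy backend name), and the match pattern is illustrative:
<source lang="C++">
global:
    config_version: 2
input:
    type: file
    path: /var/log/messages
    readall: true
grok:
    patterns_dir: ./patterns
metrics:
    - type: histogram
      name: haproxy_http_request_duration_seconds
      help: HAproxy response time histogram based on the access log
      # the backend field looks like be_edge_http:<namespace>:<service>/...
      # Tq/Tw/Tc/Tr/Tt are the HAproxy timing fields, Tt is the total time in ms
      match: '%{SYSLOGTIMESTAMP} %{DATA} haproxy\[%{DATA}\]: %{DATA}:%{DATA} \[%{DATA}\] %{DATA} %{DATA}:%{DATA:namespace}:%{DATA:service}/%{DATA} %{DATA}/%{DATA}/%{DATA}/%{DATA}/%{NUMBER:Tt} %{GREEDYDATA}'
      value: "{{divide .Tt 1000}}"
      buckets: [0.1, 0.2, 0.4, 1, 3, 8, 20, 60, 120]
      labels:
          namespace: '{{.namespace}}'
server:
    port: 9144
</source>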
Explanation:
* '''type: file''' -> read logs from file
* '''path: /var/log/messages''' -> The rsyslog server writes the logs to /var/log/messages by default
* '''readall: true''' -> always reads the entire log file. This should only be used for testing; in a live environment this always has to be set to false.
* '''patterns_dir: ./patterns''' -> Base directory of the pattern definitions in the docker image
* '''value: "{{divide .Tt 1000}}"''' -> The serving response time in the HAproxy log is in milliseconds, so we convert it to seconds.
* '''port: 9144''' -> The http port of the /metrics endpoint.
<br>
{{warning | Do not forget to set the value of '''readall''' to '''false''' in a live environment, as it can significantly degrade performance}}
<br>
<br>
===Online grok testers===
There are several online grok testing tools. They help to compile the required grok pattern very effectively. Try this: https://grokdebug.herokuapp.com/
:[[File:ClipCapIt-190808-170333.PNG]]
<br>
==Building the docker image==
The grok-exporter docker image is available on the docker hub in several variants. The only problem with them is that they do not include the rsyslog server, which we need so that the HAproxy can send its logs directly to the grok-exporter pod. <br>
docker-hub link: https://hub.docker.com/r/palobo/grok_exporter <br>
<br>
The second problem is that they are all based on an ubuntu base image, which makes it very difficult to get rsyslog to log to stdout (ubuntu doesn't support logging to stdout), which is required by the Kubernetes centralized log collector. We are going to receive the HAproxy logs in this container, so both monitoring and centralized logging must be served. Therefore the original grok Dockerfile will be ported to a '''centos 7''' base image and extended with the rsyslog installation.
<br>
All necessary files are available under my git-hub: https://github.com/berkiadam/haproxy-metrics/tree/master/grok-exporter-centos <br>I also created an ubuntu based solution, which is an extension of the original docker-hub version; it can also be found on git-hub in the '''grok-exporter-ubuntu''' folder. In the rest of this chapter, we are going to use the centOS version.
<br>
<br>
=== Dockerfile ===
We will modify the official '''palobo/grok_exporter''' Dockerfile: we extend it with the rsyslog installation and port it to centos: https://github.com/berkiadam/haproxy-metrics/tree/master/grok-exporter-centos
<br>
➲[[File:Grok-exporter-docker-build.zip|Download all files required for the Docker image build]]
<br>
<source lang="C++">
...
CMD sh -c "nohup /usr/sbin/rsyslogd -i ${PID_DIR}/pid -n &" && ./grok_exporter -config /grok/config.yml
</source>
{{note | It is important to use at least grok-exporter version '''0.2.7''', as functions were introduced in this version}}
<br>
<br>
The '''rsyslog.conf''' file must include at least the following, which enables receiving logs on port 514 over both UDP and TCP (see the zip above for details). The logs are written to stdout and to /var/log/messages.
<pre>
$ModLoad omstdout.so
...
</pre>
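A minimal rsyslog.conf that satisfies this could look like the following (legacy rsyslog syntax; the exact file in the zip may differ):
<pre>
# receive logs on UDP and TCP port 514
$ModLoad imudp
$UDPServerRun 514
$ModLoad imtcp
$InputTCPServerRun 514
# write everything to stdout and to /var/log/messages
$ModLoad omstdout.so
*.* :omstdout:
*.* /var/log/messages
</pre>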
=== Local build and local test ===
First, we build the docker image with the local docker daemon so that we can run it locally for testing. Later we will build it directly on the minishift VM, since we will only be able to upload it to the minishift docker registry from the VM. As we will upload the image to a remote (not local) docker repository, it is important to follow the naming convention:
<pre>
<repo URL>:<repo port>/<namespace>/<image-name>:<tag>
</pre>
We will upload the image to the docker registry running on the minishift, so it is important to specify the address and port of the minishift docker registry and the OpenShift namespace where the image will be deployed.
<pre>
# docker build -t 172.30.1.1:5000/default/grok_exporter:1.1.0 .
</pre>
The resulting image can be easily tested locally with native docker. Create a haproxy test log file ('''haproxy.log''') with the following content. This will be processed by the grok-exporter during the test, as if it had been provided by haproxy.
<pre>
Aug 6 20:53:30 192.168.122.223 haproxy[39]: 192.168.42.1:50708 [06/Aug/2019:20:53:30.267] public be_edge_http:mynamespace:test-app-service/pod:test-app-57574c8466-qbtg8:test-app-service:172.17.0.12:8080 1/0/0/321/321 200 135 - - --NI 2/2/0/1/0 0/0 "GET /test/slowresponse/1 HTTP/1.1"
</pre>
<br>
Put the grok config file '''config.yml''' created above in the same folder. In the config.yml, change the input.path to '''/grok/haproxy.log''', where our test log content is. Then start the container with the following '''docker run''' command:
<pre>
# docker run -d -p 9144:9144 -p 514:514 -v $(pwd)/config.yml:/etc/grok_exporter/config.yml -v $(pwd)/haproxy.log:/grok/haproxy.log --name grok 172.30.1.1:5000/default/grok_exporter:1.1.0
</pre>
<br>
After starting, check the logs and confirm that grok and rsyslog have both started:
<pre>
# docker logs grok
</pre>
<br>
Metrics are then available in the browser at http://localhost:9144/metrics:
<pre>
...
</pre>
<br>
<br>
As a second step, verify that the '''rsyslog''' server running in the docker container can receive remote log messages. To do this, first enter the container with the exec command and check the content of the /var/log/messages file in -f (follow) mode.
<pre>
# docker exec -it grok /bin/bash
# tail -f /var/log/messages
</pre>
<br>
Now, from the host machine, use the '''logger''' command to send a log message to the rsyslog server running in the container on port 514:
<pre>
# logger -n localhost -P 514 -T "this is the message"
</pre>
<br>
=== Remote build ===
We have to upload our custom grok Docker image to minishift's own registry. To do so, we need to build the image with the minishift VM's local docker daemon, since the minishift registry can only be accessed from the VM, so uploading images is only possible from there. <br>Details can be found here: [[Openshift_basics#Minishfit_docker_registry|➲Image push to minishift docker registry]]
We need special rights to access the minishift registry, even from the VM running the minishift cluster. In the example we always log in to minishift as the admin user, so we are going to extend the admin user with the '''cluster-admin''' role, which has sufficient rights for uploading images to the minishift registry. For extending the user's roles we have to act in the system namespace, so we always include the '''--as=system:admin''' parameter in the command.
<pre>
# oc login -u system:admin
# oc adm policy add-cluster-role-to-user cluster-admin admin --as=system:admin
cluster role "cluster-admin" added: "admin"
</pre>
{{note | If we get the error '''Error from server (NotFound): the server could not find the requested resource''', it probably means that our '''oc''' client is older than the OpenShift version}}
Build the image on the minishift VM as well:
<pre>
# docker build -t 172.30.1.1:5000/default/grok_exporter:1.1.0 .
</pre>
Log in to the minishift docker registry and push the image:
<pre>
# docker push 172.30.1.1:5000/default/grok_exporter:1.1.0
</pre>
<br>
==Required Kubernetes objects==
For the HAproxy-exporter we will create a serviceAccount, a deployment, a service and a configMap, in which we will store the grok-exporter configuration. In addition, we will extend the '''SecurityContextConstraints''' object named '''anyuid''', because the rsyslog server requires the grok-exporter container to run as root.
* haproxy-exporter service account
* grok-exporter deployment and service
* grok-exporter configMap
* anyuid SCC modification
<br>
<br>
===Create the ServiceAccount===
The haproxy-exporter needs its own serviceAccount, which we will allow to run its container as root. This is what the rsyslog server needs.
<pre>
apiVersion: v1
kind: ServiceAccount
metadata:
  creationTimestamp: "2019-08-10T12:27:52Z"
  name: haproxy-exporter
  namespace: default
  resourceVersion: "837500"
  selfLink: /api/v1/namespaces/default/serviceaccounts/haproxy-exporter
  uid: 45a82935-bb6a-11e9-9175-525400efb4ec
secrets:
- name: haproxy-exporter-token-8svkx
</pre>
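The service account itself can be created, for example, with the following command (the yaml above is the output of a later query):
<pre>
# oc create serviceaccount haproxy-exporter -n default
</pre>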
===Additional Kubernetes objects===
<br>
<br>
Because the grok-exporter pod runs an rsyslog server, it is important that its container runs as root. To do this, we need to add the HAproxy-exporter serviceAccount to the SCC named '''anyuid''' to enable running as root. We don't need the '''privileged''' SCC: the container merely wants to start as root, so we don't have to force it by OpenShift configuration, we just have to allow it. Without running as root, rsyslog would not be able to create its sockets.
{{warning | The admin rolebinding for the developer user in mynamespace is not enough to handle SCCs. You need to log in as admin to do this: oc login -u system:admin}}
<br><br>
<br>
The haproxy-exporter service-account must be added to the '''users''' section of the '''anyuid''' SCC in the following format:
<pre>
- system:serviceaccount:<namespace>:<serviceAccount>
</pre>
Here is the relevant part of the modified '''anyuid''' SCC:
<source lang="C++">
...
users:
- system:serviceaccount:default:haproxy-exporter
...
</source>
Since this is an existing '''scc''' and we just want to apply some minor changes to it, we can edit it 'on the fly' with the '''oc edit''' command:
<pre>
# oc edit scc anyuid
</pre>
<br>
===Create the objects===
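A possible definition of the service and the deployment could look like the following sketch (the labels, the configMap name 'grok-exporter-config' and the mount paths are assumptions; the image is the one built above):
<source lang="C++">
apiVersion: v1
kind: Service
metadata:
  name: haproxy-exporter-service
  namespace: default
spec:
  type: ClusterIP
  selector:
    app: haproxy-exporter
  ports:
  - name: metrics
    port: 9144
    targetPort: 9144
  # HAproxy sends the syslog messages over UDP by default
  - name: syslog
    port: 514
    protocol: UDP
    targetPort: 514
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: haproxy-exporter
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: haproxy-exporter
  template:
    metadata:
      labels:
        app: haproxy-exporter
    spec:
      serviceAccountName: haproxy-exporter
      containers:
      - name: grok-exporter
        image: 172.30.1.1:5000/default/grok_exporter:1.1.0
        ports:
        - containerPort: 9144
        - containerPort: 514
        volumeMounts:
        # assumption: the config is read from here, as in the docker run test above
        - name: grok-config
          mountPath: /etc/grok_exporter
        # emptyDir volume for /var/log/messages written by rsyslog
        - name: log-dir
          mountPath: /var/log
      volumes:
      - name: grok-config
        configMap:
          name: grok-exporter-config
      - name: log-dir
        emptyDir: {}
</source>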
<br>
===Testing===
Find the haproxy-exporter pod and check the logs of the pod:
<pre>
# kubectl logs haproxy-exporter-744d84f5df-9fj9m -n default
</pre>
Then enter the container and test the rsyslog server:
<pre>
# kubectl exec -it haproxy-exporter-647d7dfcdf-gbgrg /bin/bash -n default
</pre>
Now, let's check the content of the /var/log/messages file:
<pre>
# cat /var/log/messages
</pre>
Exit the container and retrieve the pod logs again to see if the log has been sent to stdout as well:
<pre>
# kubectl logs haproxy-exporter-647d7dfcdf-gbgrg -n default
</pre>
===Setting the environment variables===
In the routers, we will set the address of the rsyslog server running in the haproxy-exporter pod via environment variables. Let's first check the haproxy-exporter service:
<pre>
# kubectl get svc -n default
</pre>
HAproxy stores the rsyslog server address in the '''ROUTER_SYSLOG_ADDRESS''' environment variable (part of the Deployment). We can overwrite this at runtime with the '''oc set env''' command. After rewriting the variable, the pod restarts automatically.
<pre>
# oc set env dc/myrouter ROUTER_SYSLOG_ADDRESS=172.30.213.183 -n default
deploymentconfig.apps.openshift.io/myrouter updated
</pre>
{{note | In minishift, name resolution of Kubernetes service names does not work in the router containers, because they don't use the Kubernetes cluster DNS server but that of the minishift VM. Therefore we have to enter the service's IP address instead of its name. In OpenShift, we can use the name of the service}}
As a second step, change the HAproxy log level to debug, because it only produces access logs at debug level:
<pre>
# oc set env dc/myrouter ROUTER_LOG_LEVEL=debug -n default
deploymentconfig.apps.openshift.io/myrouter updated
</pre>
{{warning | A performance test should be carried out to see how much extra load running haproxy in debug mode causes}}
<br>
As a result of modifying the two environment variables, the HAproxy configuration in the router container ('''/var/lib/haproxy/conf/haproxy.config''') has changed to:
<pre>
  log 172.30.82.232 local1 debug
</pre>
The important thing is that the IP address of the haproxy-exporter service and the '''debug''' log level appeared in the log parameter.
<br>
<br>
<br>
===Testing the rsyslog server===
Generate some traffic through haproxy, then go back to the haproxy-exporter container and check the content of the messages file.
<pre>
# kubectl exec -it haproxy-exporter-744d84f5df-9fj9m /bin/bash -n default
#
# tail -f /var/log/messages
Aug 9 12:52:17 192.168.122.223 haproxy[24]: Proxy fe_sni stopped (FE: 0 conns, BE: 0 conns).
Aug 9 12:52:17 192.168.122.223 haproxy[32]: 127.0.0.1:43720 [09/Aug/2019:12:52:17.361] public openshift_default/<NOSRV> 1/-1/-1/-1/0 503 3278 - - SC-- 1/1/0/0/0 0/0 "HEAD / HTTP/1.1"
</pre>
If you check the logs of the haproxy-exporter pod, you should find the same log lines there as well.
===Testing the grok-exporter component===
Open the grok-exporter metrics at http://<pod IP>:9144/metrics. You can open this URL either in the haproxy-exporter pod itself with a localhost call, or in any other pod using the haproxy-exporter pod's IP address. In the example below, I enter the test-app pod. We have to see the '''haproxy_http_request_duration_seconds_bucket''' histogram among the metrics.
<pre>
# kubectl exec -it test-app-57574c8466-qbtg8 /bin/bash -n mynamespace
$
$ curl http://172.30.213.183:9144/metrics
</pre>
The simplest way to scrape these metrics would be a static, service-based Prometheus config:
<source lang="C++">
      static_configs:
      - targets: ['grok-exporter-service.default:9144']
</source>
However, we are not going to use this, for the reasons explained below.
=== Pod Level Data Collection ===
We want the haproxy-exporter pods to be scalable. This requires that Prometheus does not scrape the metrics through the service (because the service does load-balancing), but addresses the pods directly. So Prometheus must query the '''Endpoint''' definition assigned to the haproxy-exporter service from the Kubernetes API, which contains the list of IP addresses of the service's pods. We will use the '''kubernetes_sd_configs''' element of Prometheus to achieve this. (This requires that Prometheus can communicate with the Kubernetes API. For details, see [[Prometheus_on_Kubernetes]])
When using '''kubernetes_sd_configs''', Prometheus always gets a list of a specific type of Kubernetes object from the API server (node, service, endpoints, pod) and then identifies the resources from which it wants to collect the metrics according to its configuration. In the '''relabel_configs''' section of the Prometheus configuration we will define filter conditions for identifying the needed resources. In this case, we want to find the endpoint belonging to the haproxy-exporter service, because it allows Prometheus to find all the pods of the service. So, based on the Kubernetes labels, we want to find the endpoint that is called '''haproxy-exporter-service''' and has a port called '''metrics''', through which Prometheus can scrape the metrics. In Prometheus, the default scrape path is '''/metrics''', so we don't have to define it separately; this is what grok-exporter uses as well.
<pre>
# kubectl get Endpoints haproxy-exporter-service -n default -o yaml
</pre>
We are looking for two labels in the Endpoints list:
* __meta_kubernetes_endpoint_port_name: metrics -> 9144
* __meta_kubernetes_service_name: haproxy-exporter-service
<br>
The config-map that contains prometheus.yaml should be extended with the following:
<source lang="C++">
    - job_name: haproxy-exporter
</source>
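Only the job name is shown above; the full job definition could look like the following sketch, using the two meta labels that we identified:
<source lang="C++">
    - job_name: haproxy-exporter
      kubernetes_sd_configs:
      - role: endpoints
      relabel_configs:
      # keep only the endpoints of the haproxy-exporter-service
      - source_labels: [__meta_kubernetes_service_name]
        action: keep
        regex: haproxy-exporter-service
      # keep only the port called 'metrics'
      - source_labels: [__meta_kubernetes_endpoint_port_name]
        action: keep
        regex: metrics
</source>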
Wait for Prometheus to re-read the configuration file:
<pre>
# kubectl logs -f -c prometheus-server prometheus-server-75c9d576c9-gjlcr -n mynamespace
</pre>
<br>
Then, on the http://mon.192.168.42.185.nip.io/targets screen, verify that Prometheus can scrape the haproxy-exporter target:
[[File: ClipCapIt-190809-164445.PNG]]
<br>
===Scaling the haproxy-exporter===
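Scaling can be done, for example, with the following command (assuming the deployment is called haproxy-exporter):
<pre>
# oc scale deployment haproxy-exporter --replicas=2 -n default
</pre>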
<br>
==Metric types==
===haproxy_http_request_duration_seconds_bucket===
type: histogram
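An illustrative sample (made-up numbers, with the namespace label from our grok config):
<pre>
haproxy_http_request_duration_seconds_bucket{namespace="mynamespace",le="0.1"} 0
haproxy_http_request_duration_seconds_bucket{namespace="mynamespace",le="0.4"} 2
haproxy_http_request_duration_seconds_bucket{namespace="mynamespace",le="120"} 5
haproxy_http_request_duration_seconds_bucket{namespace="mynamespace",le="+Inf"} 5
</pre>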
<br>
===haproxy_http_request_duration_seconds_count===
type: counter <br>
The total number of requests counted in the given histogram.
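An illustrative sample, matching the histogram above:
<pre>
haproxy_http_request_duration_seconds_count{namespace="mynamespace"} 5
</pre>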
<br>
<br>
===haproxy_http_request_duration_seconds_sum===
type: counter <br>
The sum of the response times in the given histogram. Based on the previous example, there were a total of 5 requests and the summed serving time added up to 13 s.
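An illustrative sample:
<pre>
haproxy_http_request_duration_seconds_sum{namespace="mynamespace"} 13
</pre>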
= OpenShift router + rsyslog =
Starting with OpenShift 3.11, it is possible to fire up a router that contains a sidecar rsyslog container in the router pod. HAproxy is configured to send the logs to the rsyslog server through an emptyDir volume, and rsyslog writes them to stdout by default. The configuration of rsyslog is stored in a configMap.
[[File: ClipCapIt-190810-164907.PNG]]
<br>
You can create a router with a syslog server using the '''--extended-logging''' switch of the '''oc adm router''' command.
<pre>
# oc adm router myrouter --extended-logging -n default
</pre>
<br>
Turn on debug level logging in HAproxy:
<pre>
# oc set env dc/myrouter ROUTER_LOG_LEVEL=debug -n default
</pre>
You can see that the '''/var/lib/rsyslog/''' folder is mounted in both containers. The rsyslog.sock file will be created here, and it is referenced in the HAproxy configuration file.
<br>
<br>
=== router container ===
When we enter the router container, we can see that the configuration has already been modified:
<pre>
# kubectl exec -it myrouter-2-bps5v /bin/bash -n default -c router
</pre>
<br>
If you want to reconfigure rsyslog to send the logs to e.g. logstash, you only need to rewrite the configMap. By default, rsyslog only writes to stdout.
<pre>
# kubectl get cm rsyslog-config -n default -o yaml
</pre>
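The rsyslog configuration in the configMap is roughly the following (a sketch from memory, the exact content may differ):
<pre>
rsyslog.conf: |
  $ModLoad imuxsock
  $SystemLogSocketName /var/lib/rsyslog/rsyslog.sock
  $ModLoad omstdout.so
  *.* :omstdout:
</pre>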