<br>
'''Online grok testers'''<br>
There are several online grok testing tools that can help you compile the required grok expression very effectively. Try this one: https://grokdebug.herokuapp.com/
:[[File:ClipCapIt-190808-170333.PNG]]
<br>
The second problem is that they are all based on an Ubuntu image, which makes it very difficult to get rsyslog to log to stdout (Ubuntu's rsyslog does not support logging to stdout), and that is required by the Kubernetes centralized log collector to retain the HAproxy logs. We are going to port the original grok Dockerfile to a '''centos 7''' base image and add the rsyslog installation to the new image.
<br>
All necessary files are available in my GitHub repository: https://github.com/berkiadam/haproxy-metrics/tree/master/grok-exporter-centos <br>I also created an Ubuntu based solution, which is an extension of the original docker-hub version; it can also be found on GitHub in the '''grok-exporter-ubuntu''' folder. In the rest of this chapter, we are going to use the CentOS version.
<br>
<br>
=== Dockerfile ===
We will modify the official '''palobo/grok_exporter''' Dockerfile: we extend it with the rsyslog installation and port it to CentOS: https://github.com/berkiadam/haproxy-metrics/tree/master/grok-exporter-centos
<br>
➲[[File:Grok-exporter-docker-build.zip|Download all files required for the build of the Docker image]]
The key line of the modified Dockerfile is the CMD, which starts rsyslog in the background and then runs grok_exporter in the foreground:
<source lang="bash">
CMD sh -c "nohup /usr/sbin/rsyslogd -i ${PID_DIR}/pid -n &" && ./grok_exporter -config /grok/config.yml
</source>
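For orientation, here is a rough sketch of what such a CentOS based Dockerfile can look like. This is only an illustration under assumptions (the package names, file paths and the PID_DIR value are my guesses); the exact, tested file is in the zip and in the GitHub repository above.
<pre>
FROM centos:7

# rsyslog is needed so the container can receive the HAproxy logs on port 514
RUN yum install -y rsyslog && yum clean all

# grok_exporter binary, its config and the rsyslog configuration
COPY grok_exporter /grok/grok_exporter
COPY config.yml /grok/config.yml
COPY rsyslog.conf /etc/rsyslog.conf

ENV PID_DIR=/var/run/rsyslog
RUN mkdir -p ${PID_DIR}

WORKDIR /grok
EXPOSE 9144 514

# start rsyslog in the background, then run grok_exporter in the foreground
CMD sh -c "nohup /usr/sbin/rsyslogd -i ${PID_DIR}/pid -n &" && ./grok_exporter -config /grok/config.yml
</pre>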
{{note | It is important to use grok-exporter version 0.2.7 or higher, as the metric definitions use grok functions that were introduced in this version}}
<br>
<br>
The '''rsyslog.conf''' file must include at least the following, which enables receiving logs on port 514 over both UDP and TCP (see the zip above for details). The received logs are written to stdout and to /var/log/messages.
<pre>
$ModLoad omstdout.so
</pre>
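Only the stdout output module is shown above; as a minimal sketch (standard rsyslog directives, the exact file in the zip may differ), the full configuration could look like this:
<pre>
# output module that writes to stdout
$ModLoad omstdout.so

# receive logs on UDP and TCP port 514
$ModLoad imudp
$UDPServerRun 514
$ModLoad imtcp
$InputTCPServerRun 514

# send every message to stdout and to /var/log/messages
*.* :omstdout:
*.* /var/log/messages
</pre>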
=== Local build and local test ===
First, we build the docker image with the local docker daemon so that we can run it locally for testing. Later we will build it directly on the minishift VM, since the image can only be pushed to the minishift docker registry from the VM. As we upload the image to a remote (not local) docker registry, it is important to follow the naming convention:
<pre>
<repo URL>:<repo port>/<namespace>/<image-name>:<tag>
</pre>
We will upload the image to the docker registry running in minishift, so we have to specify the address and port of the minishift docker registry and the OpenShift namespace where the image will be deployed.
<pre>
# docker build -t 172.30.1.1:5000/default/grok_exporter:1.1.0 .
</pre>
The resulting image can easily be tested by running it locally with docker. Create a haproxy test log file ('''haproxy.log''') with the following content. During the test the grok-exporter will process it as if it had been produced by haproxy.
<pre>
Aug 6 20:53:30 192.168.122.223 haproxy[39]: 192.168.42.1:50708 [06/Aug/2019:20:53:30.267] public be_edge_http:mynamespace:test-app-service/pod:test-app-57574c8466-qbtg8:test-app-service:172.17.0.12:8080 1/0/0/321/321 200 135 - - --NI 2/2/0/1/0 0/0 "GET /test/slowresponse/1 HTTP/1.1"
</pre>
<br>
Put the grok config file '''config.yml''' created above in the same folder. In the config.yml file, change the input path to '''/grok/haproxy.log''', which is where the grok-exporter will find our test log content. Then start the container with the following '''docker run''' command:
<pre>
# docker run -d -p 9144:9144 -p 514:514 -v $(pwd)/config.yml:/etc/grok_exporter/config.yml -v $(pwd)/haproxy.log:/grok/haproxy.log --name grok 172.30.1.1:5000/default/grok_exporter:1.1.0
</pre>
<br>
<pre>
# docker logs grok
</pre>
<br>
Metrics are then available in the browser at http://localhost:9144/metrics:
<pre>
...
</pre>
<br>
<br>
As a second step, verify that the '''rsyslog''' server running in the docker container can receive remote log messages. To do this, first enter the container with the exec command and follow the content of the /var/log/messages file in -f (follow) mode:
<pre>
# docker exec -it grok /bin/bash
# tail -f /var/log/messages
</pre>
<br>
Now, from the host machine, use the '''logger''' command to send a log message to the rsyslog server running in the container on port 514:
<pre>
# logger -n localhost -P 514 -T "this is the message"
</pre>
<br>
=== Remote build ===
We have to upload our custom grok Docker image to minishift's own registry. To do so, we need to build the image with the minishift VM's local docker daemon, since the minishift registry can only be accessed from the VM, so images can only be pushed from there. <br>Details can be found here: [[Openshift_basics#Minishfit_docker_registry|➲Image push to minishift docker registry]]
<pre>
# oc login -u system:admin
# oc adm policy add-cluster-role-to-user cluster-admin admin --as=system:admin
cluster role "cluster-admin" added: "admin"
</pre>
{{note | If we get the error '''Error from server (NotFound): the server could not find the requested resource''', it probably means that our '''oc''' client is older than the OpenShift version}}
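If you work from the host machine, you can point your local docker client to the minishift VM's docker daemon and log in to the integrated registry like this (a sketch using the standard minishift commands; see the linked page above for details):
<pre>
# eval $(minishift docker-env)
# docker login -u developer -p $(oc whoami -t) 172.30.1.1:5000
</pre>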
Build the image on the minishift VM as well:
<pre>
# docker build -t 172.30.1.1:5000/default/grok_exporter:1.1.0 .
</pre>
<pre>
# docker push 172.30.1.1:5000/default/grok_exporter:1.1.0
</pre>
<br>
== Required Kubernetes objects ==
For the HAproxy-exporter we will create a serviceAccount, a deployment, a service and a configMap that stores the grok-exporter configuration. In addition, we will modify the '''anyuid''' SecurityContextConstraints object, because the rsyslog server requires the grok-exporter container to run as root. In summary, we need the following objects:
* haproxy-exporter serviceAccount
* haproxy-exporter deployment
* haproxy-exporter service
* haproxy-exporter configMap (the grok-exporter configuration)
* modified anyuid SecurityContextConstraints
<br>
<br>
=== Create the ServiceAccount ===
The haproxy-exporter needs its own serviceAccount, for which we will allow running the container as root. This is required by the rsyslog server.
<pre>
apiVersion: v1
kind: ServiceAccount
metadata:
  name: haproxy-exporter
  namespace: default
secrets:
  - name: haproxy-exporter-token-8svkx
</pre>
=== Defining the additional Kubernetes objects ===
<br>
<br>
Because the haproxy-exporter runs an rsyslog server inside the grok-exporter container, its container must run as root. To achieve this, we add the HAproxy-exporter serviceAccount to the SCC named '''anyuid''', which allows running as root. We don't need the '''privileged''' SCC: the container itself already wants to start as root, so we don't have to force it through OpenShift configuration, we just have to allow it. Without running as root, rsyslog would not be able to create its sockets.
{{warning | The admin rolebinding of the developer user on mynamespace is not enough to manage SCCs. You need to log in as admin: oc login -u system:admin}}
<br><br>
<br>
The following entry has to be added to the '''users''' list of the anyuid SCC (with the actual namespace and serviceAccount name):
- system:serviceaccount:<namespace>:<serviceAccount>
<br>
Since this is an existing '''scc''' and we just want to apply a minor change to it, we can edit it in place with the '''oc edit''' command:
<pre>
# oc edit scc anyuid
</pre>
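Alternatively, the same can be achieved with a single oc adm policy command instead of editing the SCC by hand (a sketch, assuming the serviceAccount lives in the default namespace):
<pre>
# oc adm policy add-scc-to-user anyuid -z haproxy-exporter -n default
</pre>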
<br>
=== Create the objects ===
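The objects defined above can now be created in the cluster. A minimal sketch (the yaml file names here are only examples; use the names you saved the definitions under):
<pre>
# kubectl apply -f haproxy-exporter-serviceaccount.yaml -n default
# kubectl apply -f haproxy-exporter-configmap.yaml -n default
# kubectl apply -f haproxy-exporter-deployment.yaml -n default
# kubectl apply -f haproxy-exporter-service.yaml -n default
</pre>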
<br>
=== Testing ===
Find the haproxy-exporter pod and check its logs:
<pre>
# kubectl logs haproxy-exporter-744d84f5df-9fj9m -n default
</pre>
Then enter the container and test the rsyslog server:
<pre>
# kubectl exec -it haproxy-exporter-647d7dfcdf-gbgrg /bin/bash -n default
</pre>
Now, let's list the content of the /var/log/messages file:
<pre>
# cat /var/log/messages
</pre>
Exit the container and retrieve the pod logs again to verify that the log lines have also been sent to stdout:
<pre>
# kubectl logs haproxy-exporter-647d7dfcdf-gbgrg -n default
</pre>
=== Setting the environment variables ===
In the HAproxy routers, we set the address of the rsyslog server running in the haproxy-exporter pod via environment variables. To do this, let's first look up the haproxy-exporter service:
<pre>
# kubectl get svc -n default
</pre>
HAproxy reads the rsyslog server address from the '''ROUTER_SYSLOG_ADDRESS''' environment variable (part of the router DeploymentConfig). We can overwrite it at runtime with the '''oc set env''' command. After changing the variable, the pod restarts automatically.
<pre>
# oc set env dc/myrouter ROUTER_SYSLOG_ADDRESS=172.30.213.183 -n default
deploymentconfig.apps.openshift.io/myrouter updated
</pre>
{{note | In minishift, name resolution for Kubernetes service names does not work in the router containers, because they do not use the Kubernetes cluster DNS server but that of the minishift VM. Therefore the service's IP address must be entered instead of its name. In OpenShift, we can use the name of the service}}
We also set the router's log level to debug:
<pre>
# oc set env dc/myrouter ROUTER_LOG_LEVEL=debug -n default
deploymentconfig.apps.openshift.io/myrouter updated
</pre>
{{warning | A performance test should be carried out to measure how much extra load running HAproxy in debug mode causes}}
<br>
As a result of modifying the two environment variables above, the HAproxy configuration in the router container ('''/var/lib/haproxy/conf/haproxy.config''') has changed to:
<pre>
log 172.30.82.232 local1 debug
</pre>
<br>
<br>
<br>
=== Testing the rsyslog server ===
Generate some traffic through haproxy, then go back to the haproxy-exporter container and list the content of the messages file.
<pre>
# kubectl exec -it haproxy-exporter-744d84f5df-9fj9m /bin/bash -n default
#
# tail -f /var/log/messages
Aug 9 12:52:17 192.168.122.223 haproxy[24]: Proxy fe_sni stopped (FE: 0 conns, BE: 0 conns).
Aug 9 12:52:17 192.168.122.223 haproxy[32]: 127.0.0.1:43720 [09/Aug/2019:12:52:17.361] public openshift_default/<NOSRV> 1/-1/-1/-1/0 503 3278 - - SC-- 1/1/0/0/0 0/0 "HEAD / HTTP/1.1"
</pre>
=== Testing the grok-exporter component ===
Open the grok-exporter metrics at http://<pod IP>:9144/metrics. You can open this URL either in the haproxy-exporter pod itself with a localhost call or in any other pod using the haproxy-exporter pod's IP address. In the example below, I enter the test-app pod. We have to see the '''haproxy_http_request_duration_seconds_bucket''' histogram among the metrics.
<pre>
# kubectl exec -it test-app-57574c8466-qbtg8 /bin/bash -n mynamespace
</pre>
=== Pod Level Data Collection ===
We want the haproxy-exporter pods to be scalable. This requires that Prometheus does not scrape the metrics through the service (because the service does load balancing) but addresses the pods directly. So Prometheus must query the '''Endpoints''' object assigned to the haproxy-exporter service from the Kubernetes API, which contains the list of IP addresses of the service's pods. We will use the '''kubernetes_sd_configs''' element of Prometheus to achieve this. (This requires that Prometheus is able to communicate with the Kubernetes API. For details, see [[Prometheus_on_Kubernetes]])
When using '''kubernetes_sd_configs''', Prometheus always gets the list of a given type of Kubernetes object from the API server (node, service, endpoints, pod) and then, based on its configuration, identifies those resources from which it wants to collect metrics. In the '''relabel_configs''' section of the Prometheus configuration we define filter conditions on the labels of the given Kubernetes resources to identify the ones we need. In this case, we want to find the Endpoints object belonging to the haproxy-exporter service, because through it Prometheus can find all the pods of the service. So, based on the Kubernetes labels, we want to find the Endpoints object called '''haproxy-exporter-service''' that also has a port named '''metrics''', through which Prometheus can scrape the metrics. The default scrape URL in Prometheus is '''/metrics''', so we don't have to define it separately; grok-exporter serves its metrics there implicitly.
<pre>
# kubectl get Endpoints haproxy-exporter-service -n default -o yaml
</pre>
We are looking for two labels in the Endpoints list:
* __meta_kubernetes_endpoint_port_name: metrics -> 9144
* __meta_kubernetes_service_name: haproxy-exporter-service
<br>
The ConfigMap that contains prometheus.yml should be extended with the following job:
<source lang = "C ++">
- job_name: haproxy-exporter
</pre>
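Only the job name is shown above; a minimal sketch of what the complete scrape job could look like (the relabel rules are my assumption based on the two labels listed above, the exact configuration is in the GitHub repository):
<source lang="yaml">
- job_name: haproxy-exporter
  scheme: http
  kubernetes_sd_configs:
    - role: endpoints
      namespaces:
        names:
          - default
  relabel_configs:
    # keep only the endpoints of haproxy-exporter-service that expose the port named "metrics"
    - source_labels: [__meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
      action: keep
      regex: haproxy-exporter-service;metrics
</source>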
Then check in the Prometheus server logs that the new configuration has been loaded without errors:
<pre>
# kubectl logs -f -c prometheus-server prometheus-server-75c9d576c9-gjlcr -n mynamespace
</pre>
<br>
Then, on the http://mon.192.168.42.185.nip.io/targets screen, verify that Prometheus can scrape the haproxy-exporter target:
[[File:ClipCapIt-190809-164445.PNG]]
<br>
=== Scaling the haproxy-exporter ===
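The exporter can be scaled like any other deployment. A minimal sketch (assuming the deployment is named haproxy-exporter):
<pre>
# kubectl scale deployment haproxy-exporter --replicas=2 -n default
</pre>
Since Prometheus discovers the pods through the Endpoints object and not through the service, the new replicas are picked up and scraped automatically.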
<br>
== Metric types ==
=== haproxy_http_request_duration_seconds_bucket ===
type: histogram
<br>
=== haproxy_http_request_duration_seconds_bucket_count ===
type: counter <br>
The total number of requests observed in the histogram.
<br>
<br>
=== haproxy_http_request_duration_seconds_sum ===
type: counter <br>
The sum of the response times in a given histogram. Based on the previous example, there were a total of 5 requests and the total serving time added up to 13 s.
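As an illustration of how the three series of the histogram relate to each other (the label set here is made up, the real labels depend on the grok-exporter configuration), the 5 requests with a total of 13 s serving time could appear in the /metrics output like this:
<pre>
haproxy_http_request_duration_seconds_bucket{path="/test",le="1"} 3
haproxy_http_request_duration_seconds_bucket{path="/test",le="5"} 5
haproxy_http_request_duration_seconds_bucket{path="/test",le="+Inf"} 5
haproxy_http_request_duration_seconds_count{path="/test"} 5
haproxy_http_request_duration_seconds_sum{path="/test"} 13
</pre>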