How to display custom application metrics, captured with the Prometheus Golang client library, from all pods running in Kubernetes
I am trying to get some custom application metrics, captured in Go using the Prometheus client library, to show up in Prometheus.
I have the following working:
- I have a Go application which exposes metrics on localhost:8080/metrics, as described in this article: https://godoc.org/github.com/prometheus/client_golang/prometheus
- I have a Kubernetes minikube running which has Prometheus, Grafana and Alertmanager running, using the operator from this article: https://github.com/coreos/prometheus-operator/tree/master/contrib/kube-prometheus
I created a Docker image for my Go app; when I run it and go to localhost:8080/metrics, I can see the Prometheus metrics in a browser.
I use the following pod.yaml to deploy my Docker image to a pod in k8s:

apiVersion: v1
kind: Pod
metadata:
  name: my-app-pod
  labels:
    zone: prod
    version: v1
  annotations:
    prometheus.io/scrape: 'true'
    prometheus.io/port: '8080'
spec:
  containers:
  - name: my-container
    image: name/my-app:latest
    imagePullPolicy: IfNotPresent
    ports:
    - containerPort: 8080
- If I connect to my pod using:
kubectl exec -it my-app-pod -- /bin/bash
and then run wget on "localhost:8080/metrics", I can see my metrics.
So far so good; here is where I am hitting a wall. I could have multiple pods running this same image, and I want to expose all of those pods to Prometheus as targets. How do I configure my pods so that they show up in Prometheus, so I can report on my custom metrics?
Thanks for any help offered!
You need 2 things:
- a ServiceMonitor for the Prometheus Operator, which specifies which services will be scraped for metrics
- a Service which matches the ServiceMonitor and points to your pods
There is an example in the docs over here: https://coreos.com/operators/prometheus/docs/latest/user-guides/running-exporters.html
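Adapted to the pod.yaml from the question, the pair could look roughly like this (the Service/ServiceMonitor names, the `app` label, and the `team` label are assumptions; the Service's pod selector must match the pod's labels, and the ServiceMonitor's endpoint port must match the Service's port name):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app-service
  labels:
    app: my-app            # the ServiceMonitor selects Services by this label
spec:
  selector:
    zone: prod             # must match the labels on my-app-pod
  ports:
  - name: metrics
    port: 8080
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: my-app-servicemonitor
  labels:
    team: my-team          # must be matched by the Prometheus resource's serviceMonitorSelector
spec:
  selector:
    matchLabels:
      app: my-app          # selects the Service above
  endpoints:
  - port: metrics          # refers to the Service port name, not the number
```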
Can you share the Prometheus config that you are using to scrape the metrics? The config controls which sources metrics are scraped from. Here is a link you can refer to: https://groups.google.com/forum/#!searchin/prometheus-users/Application$20metrics$20monitoring$20of$20Kubernetes$20Pods%7Csort:relevance/prometheus-users/uNPl4nJX9yk/cSKEBqJlBwAJ
The kubernetes_sd_config directive can be used to discover all pods with a given label. Your prometheus.yml config file should have something like this:
- job_name: 'some-app'
  kubernetes_sd_configs:
    - role: pod
  relabel_configs:
    - source_labels: [__meta_kubernetes_pod_label_app]
      regex: python-app
      action: keep
The source label [__meta_kubernetes_pod_label_app] uses the Kubernetes API to look at pods that have a label named 'app' and whose value is matched by the regex given on the line below (in this case, 'python-app').
Once you've done this Prometheus will automatically discover the pods you want and start scraping the metrics from your app.
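Since the pod.yaml in the question already carries prometheus.io/scrape and prometheus.io/port annotations, an alternative is to key discovery off those annotations instead of an app label. This is a widely used pattern, but still a sketch to adapt to your setup:

```yaml
- job_name: 'kubernetes-pods'
  kubernetes_sd_configs:
    - role: pod
  relabel_configs:
    # Keep only pods annotated with prometheus.io/scrape: 'true'
    - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
      action: keep
      regex: true
    # Rewrite the scrape address to use the port from prometheus.io/port
    - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
      action: replace
      regex: ([^:]+)(?::\d+)?;(\d+)
      replacement: $1:$2
      target_label: __address__
```

With this in place, any pod carrying those two annotations is scraped automatically, without listing individual apps in the config.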
Hope that helps. You can follow the blog post here for more detail.
Note: it is worth mentioning that at the time of writing, kubernetes_sd_config is still in beta, so breaking changes to its configuration may occur in future releases.