Prometheus metrics to Elasticsearch with Elastic Beat on Kubernetes (ECK)
When monitoring Kubernetes workloads, most of the time you go for Prometheus / Grafana, because of:
- the small footprint: a few GB of memory and a few CPUs
- the integration with the ServiceMonitor and PrometheusRule CRDs
- the kube-prometheus-stack Helm chart
The concept behind https://github.com/prometheus-operator/prometheus-operator is simple and powerful: Prometheus and Alertmanager instances configured through the Kubernetes custom resources ServiceMonitor and PrometheusRule.
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: dev-staging-lifecycle-kube-state-metrics
spec:
  endpoints:
    - honorLabels: true
      port: http
  jobLabel: app.kubernetes.io/name
  selector:
    matchLabels:
      app.kubernetes.io/instance: dev-staging-lifecycle
      app.kubernetes.io/name: kube-state-metrics
This ServiceMonitor object instructs prometheus-operator to configure Prometheus to scrape the service matched by the label selector. No need to restart or deal with Prometheus itself, this is operator magic ^^
What about using Elasticsearch / Kibana?
We install Elastic Cloud on Kubernetes, aka ECK: https://www.elastic.co/guide/en/cloud-on-k8s/2.8/index.html
It allows us to easily deploy Elasticsearch, Kibana, and Metricbeat.
Sadly, ECK does not offer the same behavior as prometheus-operator: there is no hot configuration. I have created an issue about it, please +1: https://github.com/elastic/cloud-on-k8s/issues/7100
So let's do it manually!
Deploy Metricbeat to scrape a Prometheus metrics endpoint and send data to Elasticsearch with ECK
Here I am assuming you already have Elasticsearch + Kibana running, so let's deploy a Metricbeat to scrape our Prometheus endpoint.
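If you don't have them yet, a minimal Elasticsearch + Kibana pair deployed through ECK looks roughly like this (a sketch following the ECK quickstart; the quickstart name and the single-node sizing are placeholders to adapt):

apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart
spec:
  version: 8.8.1
  nodeSets:
    # single node without mmap: fine for a demo, not for production
    - name: default
      count: 1
      config:
        node.store.allow_mmap: false
---
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: quickstart
spec:
  version: 8.8.1
  count: 1
  # connect Kibana to the Elasticsearch cluster above
  elasticsearchRef:
    name: quickstart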
We use the prometheus module from https://www.elastic.co/guide/en/beats/metricbeat/current/metricbeat-module-prometheus.html, which is installed by default.
apiVersion: beat.k8s.elastic.co/v1beta1
kind: Beat
metadata:
  name: prom-scrap
spec:
  type: metricbeat
  version: 8.8.1
  config:
    logging:
      json: true
      level: info
    metricbeat:
      modules:
        # prometheus endpoints definition
        - module: prometheus
          period: 30s
          metricsets: ["collector"]
          hosts: ["kube-state-metrics.monitoring.svc:9090"]
          metrics_path: /metrics
          service:
            name: dev-staging-lifecycle
    # Classic configuration, or you can use the elasticsearchRef field
    output.elasticsearch:
      allow_older_versions: true
      hosts:
        - '${ELASTICSEARCH_URL}'
      password: '${ELASTICSEARCH_PASSWORD}'
      username: '${ELASTICSEARCH_USERNAME}'
      # default output index
      index: 'prom-metrics'
    processors:
      - add_cloud_metadata: {}
      - add_host_metadata:
          netinfo.enabled: false
    setup:
      dashboards.enabled: false
      template.enabled: false
  deployment:
    podTemplate:
      spec:
        automountServiceAccountToken: true
        containers:
          - args:
              - '-e'
              - '-c'
              - /etc/beat.yml
            # env from secret holding the Elasticsearch connection details
            envFrom:
              - secretRef:
                  name: beat-es-connection
            name: metricbeat
            resources:
              limits:
                cpu: 2000m
                memory: 128Mi
              requests:
                cpu: 50m
                memory: 128Mi
            # security best practice
            securityContext:
              allowPrivilegeEscalation: false
              capabilities:
                drop:
                  - all
              runAsGroup: 1000
              runAsNonRoot: true
              runAsUser: 1000
            volumeMounts:
              - mountPath: /usr/share/metricbeat/logs
                name: beat-logs
              - mountPath: /usr/share/metricbeat/data
                name: beat-data
        dnsPolicy: ClusterFirstWithHostNet
        hostNetwork: false
        securityContext:
          fsGroup: 1000
          runAsGroup: 1000
          runAsUser: 1000
        serviceAccountName: default
        terminationGracePeriodSeconds: 30
        volumes:
          - emptyDir: {}
            name: beat-data
          - emptyDir: {}
            name: beat-logs
This will give us the following data in the prom-metrics index:
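(A sketch of one resulting document; exact fields vary with the Beat version and the scraped endpoint, and the label values below are illustrative.)

{
  "@timestamp": "2023-08-01T12:00:00.000Z",
  "event": { "module": "prometheus", "dataset": "prometheus.collector" },
  "metricset": { "name": "collector", "period": 30000 },
  "service": { "name": "dev-staging-lifecycle" },
  "prometheus": {
    "labels": { "namespace": "monitoring", "pod": "kube-state-metrics-xxxxx" },
    "metrics": { "kube_pod_status_phase": 1 }
  }
}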
Pretty cool for a single metrics endpoint!
The lazy way: scrape all Prometheus data into Elasticsearch
As you have seen, we must write a Metricbeat module entry for every Prometheus endpoint. Since I already use ServiceMonitor objects, my Prometheus is full of data, so let's replicate it into Elasticsearch to get all the metrics!
The Prometheus remote_write feature instructs Prometheus to forward the samples it ingests to a URL (here, the Beat). On the Beat side, the prometheus module's remote_write metricset opens the listening endpoint; 9201 is the port used in the Metricbeat docs, adjust as needed:
apiVersion: beat.k8s.elastic.co/v1beta1
kind: Beat
...
spec:
  ...
  config:
    ...
    metricbeat:
      modules:
        # remote_write metricset: accept samples pushed by Prometheus
        - module: prometheus
          metricsets: ["remote_write"]
          host: "0.0.0.0"
          port: "9201"
    # Classic configuration, or you can use the elasticsearchRef field
    output.elasticsearch:
      ...
    processors:
      ...
  deployment:
    podTemplate:
      spec:
        ...
        containers:
          - ...
            # declare a listen port
            ports:
              - containerPort: 9201
                name: prometheus
                protocol: TCP
            # increase memory to 2Gi
            resources:
              limits:
                cpu: '2'
                memory: 2Gi
              requests:
                cpu: 50m
                memory: 2Gi
        ...
We must also define a Service in front of the Beat:
apiVersion: v1
kind: Service
metadata:
  name: beat-prom
spec:
  selector:
    beat.k8s.elastic.co/name: dev-staging-lifecycle
  ports:
    - name: prometheus
      protocol: TCP
      port: 9201
      targetPort: prometheus
Now let's configure Prometheus. We of course use the kube-prometheus-stack Helm chart, so we just have to edit values.yaml: https://github.com/prometheus-community/helm-charts/blob/kube-prometheus-stack-48.3.1/charts/kube-prometheus-stack/values.yaml#L3193
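A sketch of the values.yaml change, assuming the beat-prom Service lives in the monitoring namespace (the Metricbeat remote_write metricset accepts writes on the /write path):

prometheus:
  prometheusSpec:
    remoteWrite:
      # URL of the beat-prom Service defined above; adjust the namespace
      - url: http://beat-prom.monitoring.svc:9201/write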
Et voilà!
> Of course, you don't want ALL Prometheus metrics in a SINGLE Elasticsearch index, because this will create more than 1000 fields (the default field limit of an index). I recommend selecting the metrics you want to send in the Prometheus remote_write config, or using multiple indices.
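For example, you can filter at the source with remote_write relabelling, exposed as writeRelabelConfigs in the Prometheus CRD (and thus in the chart values). The kube_.* regex below is just an illustration, keep whatever matters to you:

prometheus:
  prometheusSpec:
    remoteWrite:
      - url: http://beat-prom.monitoring.svc:9201/write
        writeRelabelConfigs:
          # keep only kube-state-metrics series, drop the rest
          - sourceLabels: [__name__]
            regex: 'kube_.*'
            action: keep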