
Alertmanager template problem with Prometheus rule

Hey folks,
I have Alertmanager integrated with Opsgenie and it's working well; however, I'm facing a problem when my annotations match more than one fired alert.

For example, I have a PrometheusRule that monitors Kubernetes pods in a crash/pending state. If more than one pod is having problems, the description annotation below does not appear in Opsgenie, only runbook and dashboard. If only one pod is having problems, I can see the description in Opsgenie as expected.

This annotation does not appear in Opsgenie if more than one pod is firing:

description: Pod {{ $labels.pod }} in the namespace {{ $labels.namespace }}

I guess it's something related to arrays; I'm not sure where or how to fix it.

Alertmanager template config


 global: {}
 receivers:
   - name: opsgenie
     opsgenie_configs:
       - api_key: ${opsgenie_key}
         description: |-
           {{ range .CommonAnnotations.SortedPairs }}
           - {{ .Name }} = {{ .Value }}
           {{- end }}
         message: '[{{ .Status | toUpper }}{{ if eq .Status "firing" }}:{{ .Alerts.Firing | len }}{{ end }}] {{ .GroupLabels.alertname }}'
         priority: '{{ if .GroupLabels.priority }}{{ .GroupLabels.priority }}{{ else }}p2{{ end }}'
         responders:
           - name: '{{ if .GroupLabels.responders }}{{ .GroupLabels.responders }}{{ else }}platform{{ end }}'
             type: team
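From what I've read, Alertmanager's `.CommonAnnotations` only keeps annotations whose values are identical across every alert in the notification group, so a per-pod description would drop out as soon as two pods fire with different values. Ranging over the individual alerts instead might keep it (untested sketch, using the per-alert `.Annotations` field from Alertmanager's Go template data):

```yaml
description: |-
  {{ range .Alerts }}
  - {{ .Annotations.description }}
  {{- end }}
```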

Prometheus Rule

 apiVersion: monitoring.coreos.com/v1
 kind: PrometheusRule
 metadata:
   labels:
     app: kube-prometheus-stack
     release: kube-prometheus-stack
   name: kube-pod-crash-looping-platform
   namespace: platform
 spec:
   groups:
     - name: eks
       rules:
         - alert: KubePodCrashLooping
           annotations:
             description: Pod {{ $labels.pod }} in the namespace {{ $labels.namespace }}
             dashboard: https://my-grafana-url
           expr: max_over_time(kube_pod_container_status_waiting_reason{pod=~"liftbridge-.*|nats-.*|redis-.*|consul-server-.*|vault-0|vault-1|vault-2|vault-agent-injector-.*|argocd-.*|argo-rollouts-.*|coredns-.*|istio-.*|istiod-.*|hubbble-.*|external-.*|keda-.*", reason="CrashLoopBackOff"}[10m]) >= 1
           for: 10m
           labels:
             env: dev
             priority: p2
             responders: platform
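For reference, which alerts end up in the same group (and therefore which annotations stay "common") is decided by `group_by` in the Alertmanager route. Since my templates read `.GroupLabels.priority` and `.GroupLabels.responders`, the route presumably looks something like this (hypothetical fragment, not my exact config):

```yaml
route:
  receiver: opsgenie
  group_by: ['alertname', 'priority', 'responders']
```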

1 answer

Answer accepted
Darryl Lee
Atlassian Team
Jun 29, 2022

Hi @daniel.rosa,

This is Darryl. I am here to help. 😃

I understand that you would like to know why some details from Prometheus Alertmanager are not being rendered on Opsgenie alerts.

In order to dive deeper into the logs, we will need your consent to access your Opsgenie instance, and it would be much more efficient to communicate over a support request.

Please consider raising a support request to our team via this link.


Kind regards,
Darryl Lee
Support Engineer, Atlassian
