
How to set Opsgenie alert priority from Prometheus alert priority

Hello

I have the priority level in Prometheus set as 'severity' with the values low, medium, and critical. I tried using Opsgenie's 'Custom Value to Opsgenie priority' option with the value {{severity.substring(0,2)}}, but that does not seem to change the priority level of the Opsgenie alert. What am I missing here? Can someone please help me?

Thanks!

4 answers

Hi folks,

Posting my configuration, which maps severity to Opsgenie priority as follows:

critical => P1
warning => P2
info => P3
none => P4

The Alertmanager configuration I used:

global:
  resolve_timeout: 1m
  opsgenie_api_url: https://api.eu.opsgenie.com/
  opsgenie_api_key: <my-top-secret-api-key>
receivers:
  - opsgenie_configs:
      - teams: <my-team-name>
        priority: '{{ range .Alerts }}{{ if eq .Labels.severity "critical"}}P1{{else if eq .Labels.severity "warning"}}P2{{else if eq .Labels.severity "info"}}P3{{else}}P4{{end}}{{end}}'
    name: opsgenie
route:
  group_by: ['...']
  receiver: opsgenie
  repeat_interval: 5m

I tested this on the Opsgenie free tier with Prometheus Alertmanager.

Please mind the indentation while configuring.

Final note: substitute your own API key and team name. The group_by and related fields are not optimised; I tested this with just four alerts carrying the severities listed above.

global:
  resolve_timeout: 1m
  opsgenie_api_url: https://api.eu.opsgenie.com/
  opsgenie_api_key: "XXXXXXXXXX"
receivers:
  - opsgenie_configs:
      - responders:
          - type: team
            name: "<team name>"
        priority: '{{ range .Alerts }}{{ if eq .Labels.severity "critical"}}P1{{else if eq .Labels.severity "warning"}}P2{{else if eq .Labels.severity "info"}}P3{{else}}P4{{end}}{{end}}'
    name: opsgenie
route:
  group_by: ['alertname','service']
  receiver: opsgenie
  repeat_interval: 5m

The above configuration worked for me. I was working with Atlassian Support, and when we tried this config it worked for us.

The above will work, but only if you have a single alert! The problem is this line:

priority: '{{ range .Alerts }}{{ if eq .Labels.severity "critical"}}P1{{else if eq .Labels.severity "warning"}}P2{{else if eq .Labels.severity "info"}}P3{{else}}P4{{end}}{{end}}'


The "range" here iterates over all alerts, so if a notification contains more than one alert this will generate a priority like "P2P2", which Opsgenie will reject:

level=error ts=2022-03-09T14:59:23.350Z caller=dispatch.go:354 component=dispatcher msg="Notify for alerts failed" num_alerts=2 err="opsgenie/opsgenie[0]: notify retry canceled due to unrecoverable error after 1 attempts: unexpected status code 422: {\"message\":\"Request body is not processable. Please check the errors.\",\"errors\":{\"priority\":\"should be one of [ P1, P2, P3, P4, P5 ]\"},\"took\":0.001,\"requestId\":\"c4390ce3-f99c-4149-820d-1886421ef213\"}"
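If you want to keep a single receiver, one possible workaround (a sketch in the same template style as above, not something I have tested here) is to derive the priority from only the first alert in the group, so grouped notifications still produce a single P1-P4 value:

# Sketch only: base the priority on the first alert in the group
# instead of ranging over all of them.
priority: '{{ with index .Alerts 0 }}{{ if eq .Labels.severity "critical" }}P1{{ else if eq .Labels.severity "warning" }}P2{{ else if eq .Labels.severity "info" }}P3{{ else }}P4{{ end }}{{ end }}'

Note that this uses the first alert's severity, which may not be the highest one in the batch; the routing approach below avoids templated priorities entirely.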


I'm using this configuration to group together alerts of the same priority and send them to opsgenie as a batch:

receivers:
  - name: "null"
  - name: "opsgenie - critical"
    opsgenie_configs:
      - priority: P1
  - name: "opsgenie - warning"
    opsgenie_configs:
      - priority: P2
  - name: "opsgenie"
    opsgenie_configs:
      - priority: P3
route:
  receiver: "null"
  group_by:
    - job
  routes:
    # Alerts in this receiver are ignored.
    - receiver: "null"
      match:
        alertname: Watchdog
    - receiver: "opsgenie - critical"
      match:
        severity: "critical"
    - receiver: "opsgenie - warning"
      match:
        severity: "warning"
    - receiver: "opsgenie"

My Prometheus + Alertmanager setup works fine with priorities!

Example: in Prometheus, I have this alert:


- alert: Instance_Free_Inodes
  expr: (disk_inodes_used * 100 / disk_inodes_total) > 75
  for: 5m
  labels:
    severity: page
    alert_target: '{{ $labels.host }}'
  annotations:
    description: '{{ $labels.alias }} of job {{ $labels.job }} ({{ $labels.path }}) has ({{ $value | printf "%.2f" }})% used inodes for more than 5 minutes.'
    summary: '{{ $labels.alias }} using more than 75% inodes on {{ $labels.path }}'
    impact: Services may fail
    action: Check the server disks and remove some files, if possible
    dashboard: https://grafana.example.com/d/pjhLJOzmk/infrastructure-hosts-stats-3hours
    priority: P3

Then in Alertmanager:


- name: opsgenie_techops_team
  opsgenie_configs:
    - send_resolved: true
      http_config: {}
      api_key: <secret>
      api_url: https://api.opsgenie.com/
      message: '{{ template "opsgenie.default.message" . }}'
      description: '{{ template "opsgenie.default.description" . }}'
      source: '{{ template "opsgenie.default.source" . }}'
      responders:
        - name: techops_team
          type: team
      priority: '{{ if .CommonAnnotations.priority }}{{ .CommonAnnotations.priority }}{{ else }}P3{{ end }}'

 

 

This way I can set the priority in the Prometheus alert and it is pushed to Opsgenie correctly.

0 votes
Nick H Atlassian Team Nov 12, 2019

With Prometheus' Alertmanager config file you should be able to map the severity field with something custom like this:

priority: '{{ range .Alerts }}{{ if eq .Labels.severity "critical"}}P1{{else}}P3{{end}}{{end}}'

You can always change "P1" and "P3" to your preference.


@Nick H Thanks! I tried that out and it maps the priority to P1 for critical alerts, but I still wonder why the option on the Opsgenie portal did not work, i.e. setting the 'Custom Value to Opsgenie priority' option with the value {{severity.substring(0,2)}}.

Any update on this yet?

Nick H Atlassian Team Nov 18, 2019

I'm realizing that Prometheus unfortunately is not an integration that supports configuring a custom value to an Opsgenie property, such as the value {{severity.substring(0,2)}}:


 

So the best option is mapping this on the Alertmanager side as shown above. We also see customers using Alert Policies to parse the severity field and modify/map the alert's priority that way, but these policies are only included in our Standard and Enterprise plans: https://docs.opsgenie.com/docs/global-policies#section-alert-policy


Apologies, but I'm not satisfied with your answer. If you say Prometheus is not an integration that can have a custom value mapped to an Opsgenie property, why is the 'Custom Value to Opsgenie priority' option even included on the Prometheus integration page? If it cannot take values like '{{severity.substring(0,2)}}', what other values can it take? Can you please give me an example so that I understand the feature better?

Nick H Atlassian Team Nov 18, 2019

It's not an option (unless you are seeing something else with your integration):


Prometheus posts to Opsgenie using an API key, and because of this Opsgenie expects the priority to be one of the literal values P1-P5.
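To illustrate what the alert creation API expects, here is a minimal request body (a sketch; the message text is made up, and only the fields relevant to this thread are shown) where priority must be a literal P1-P5 string, exactly as the 422 error above demands:

{
  "message": "Disk usage above 75% on a host",
  "priority": "P1"
}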

 

This is what I see, and I would like to know how to use it and what values it takes.

Nick H Atlassian Team Nov 19, 2019

You can use this option (Custom Value to Opsgenie Priority) with other webhook-based integrations. Unfortunately it does not work with our API integrations such as Prometheus.

Prometheus is an integration that sends notifications through our API (https://prometheus.io/docs/alerting/configuration/#opsgenie_config), and due to this, Opsgenie expects exactly this value to be sent via the alert API.

So for your use-case, you'd need to map this in Alertmanager (since you are on a Free plan, with limited access to other ways of mapping this in Opsgenie).

0 votes
Nick H Atlassian Team Oct 25, 2019

Hi Ashman,

I think you are on the right track, but missing one character. Can you try this string instead:

P{{severity.substring(0,2)}}

The priority is touchy in the sense that it needs a hardcoded 'P' to map to our priorities.

Let us know if this does the trick! If not, I may need to take a look at what the logs are parsing or what the severity looks like.

@Nick H Thanks for the response. I tried what you suggested but that did not seem to work. Please take a look and let me know about the workaround. Thanks!
