We've set up the Incoming Calls integration, and the alerts it creates work fine: they are auto-acknowledged if the call is answered, and if not, they escalate until someone does acknowledge them.
However, once the call is acknowledged, there generally isn't much other activity needed. Is there a way to auto-close these alerts, say a few hours after the acknowledgement?
@Donna Askew , our team handles this type of automation externally to Opsgenie, with a script that hits the Alert API. We currently run a script that closes all P5 alerts after X days of inactivity, which is similar to what you're describing.
If we were doing this, we'd use the API to find any alerts that were acknowledged but had no activity for X hours, then issue a close on each of them.
Hi @Donna Askew ,
Unfortunately there isn't a way to auto-close acknowledged alerts based on criteria this specific.
Notification policies can be used to auto-close alerts after a set amount of time, but they cannot be limited to only alerts that have been acknowledged:
The notification policy above does two things:
1) Restarts notifications for Phone Call alerts after one hour. This ensures the alert is not lost and continues notifying users that it's still open.
2) Auto-closes the alert after two hours.
For this policy to apply, the team would need to be added as a responder when the alert is created. Under the Incoming Call integration's Advanced tab, you can add the team as a responder:
One other thing to consider is if you add a team in the responder field of the Phone Alert action, the alert would be treated like any other alert, and notify the team's users based on what's configured under the On-call tab.
Typically we'll see customers configuring a specific routing rule + escalation for Phone Call alerts that provide a "buffer" before notifying users. The buffer allows for the caller to leave a voicemail - which in turn leaves the alert open. If no voicemail is left, the alert is closed and notifications are not sent to users:
Hope that helps, makes sense, etc. - but let us know if you have any questions about what's mentioned above.
Hi @Donna Askew ,
Incoming Call alerts work a bit differently than other alerts in Opsgenie. They are "global" integrations which are not assigned to a team, and instead you specify which team/escalation/schedule/user the calls will be forwarded to via the forward call to field.
This means when a call is answered, even if the forward call to field is set to a team, there will be no team responder on the alert. Because of this, there is no way to set up a notification policy to auto-close these alerts, as notification policies are configured within a team and only apply to alerts routed to that team.
We do have a bit of a "hacky" workaround that would allow for a team to be added to the incoming call alerts, and in turn allow for notification policies on that team to apply to these alerts, but there are some other non-ideal consequences of implementing this workaround.
Within your Incoming Call Integration, in the Create Alert action, you can add a team as a responder there.
This will add that team as a responder to all alerts created through this incoming call integration.
You can then set up a notification policy on the team to auto-close these alerts after X minutes.
However, having this team added as a responder can introduce duplicate notifications for alerts created through the incoming call integration.
This is because incoming call alerts already send "incoming call" notifications to any user they try calling (based on the forward call to field). Adding this team as a responder will also trigger "New Alert" notifications to whoever is targeted via the team's routing rules, so users can be notified twice for the same alert.
For further clarification and guidance on implementing this workaround, please contact support via support.atlassian.com and we can work with you to see if this workaround would be suitable for your use-case.