Using AI and automation to build resiliency into Bitbucket pipelines

Everyone knows the buzzwords: Automation, Cloud native, DevOps, and NoOps. Yet industry surveys like the State of DevOps Report, the DORA DevOps Maturity Report, and Dynatrace's Autonomous Cloud Management (ACM) Survey confirm that most companies still fall short of the highest performers by orders of magnitude. Too many enterprises are slowed by manual software deployments and operational tasks. They don't have the right data to make better decisions faster, or are still trapped in silos where business, dev, and ops work in isolation instead of collaborating on measurable goals.

In a recent collaboration between Atlassian and Dynatrace, we explored how automation can build resiliency into cloud pipelines, increase the quality of software performance in production, and lower operational risk. We believe companies don't need to trade off speed against safety: automation can change the equation.

To show how easy it is to start breaking down communication silos and automate performance verification, this post explores two ways to integrate Dynatrace into your continuous delivery pipelines using Bitbucket pipes.

Dynatrace and Bitbucket pipes

  • Dynatrace provides software intelligence to simplify enterprise cloud complexity and accelerate digital transformation. With AI and complete automation, our all-in-one platform provides answers, not just data, about the performance of applications, the underlying infrastructure, and the experience of all users. Rich Dynatrace APIs let the platform integrate with your DevOps toolchain and connect engineering and production for auto-remediation and auto-remediation-as-code, on any platform, cloud, or stack, without human intervention.
  • Bitbucket Pipelines is CI/CD for Bitbucket Cloud that’s integrated in the UI and sits alongside your repositories, making it easy for teams to get up and running building, testing, and deploying their code.
  • Bitbucket pipes provide a simple way to configure a pipeline using preassembled Docker images.

Below is a representative Bitbucket build and release pipeline that includes typical tasks along with the Dynatrace push event and Keptn SLO/SLI performance quality gates (a sketch of the corresponding bitbucket-pipelines.yml follows the numbered steps).

[flow.png: Bitbucket build and release pipeline with Dynatrace deployment/annotation events and a Keptn quality gate]

  1. Service Level Objective and Service Level Indicator files are checked into Bitbucket
  2. Application code is built as a Docker image and pushed to a Docker registry
  3. The code is deployed as the application under test
  4. Dynatrace Deployment event is pushed to provide build context
  5. Performance tests run and monitoring metrics are collected by Dynatrace
  6. Dynatrace Annotation event is pushed with test start/stop times
  7. The Keptn quality gate is called. Results are pass, warn, or fail; a fail stops the pipeline, allowing for triage of the issue
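
For concreteness, a minimal bitbucket-pipelines.yml implementing this flow could look something like the sketch below. The pipe references (dynatrace/dynatrace-push-event, dynatrace/keptn-quality-gate), their variables, and the deploy/test scripts are illustrative placeholders; the published pipes' README files document the real names and required variables.

    # Step 1: the SLO/SLI definition files (e.g. keptn/slo.yaml, keptn/sli.yaml) live in this repository
    image: atlassian/default-image:2

    pipelines:
      default:
        - step:
            name: Build and push Docker image
            services:
              - docker
            script:
              # Step 2: build the application code as a Docker image and push it to a registry
              - docker build -t "$DOCKER_REGISTRY/front-end:$BITBUCKET_BUILD_NUMBER" .
              - docker push "$DOCKER_REGISTRY/front-end:$BITBUCKET_BUILD_NUMBER"
        - step:
            name: Deploy and push Dynatrace deployment event
            script:
              # Step 3: deploy the application under test (deployment tooling is project-specific)
              - ./deploy.sh "$BITBUCKET_BUILD_NUMBER"
              # Step 4: push a Dynatrace deployment event to provide build context
              - pipe: dynatrace/dynatrace-push-event:0.1.0
                variables:
                  DT_URL: $DT_URL
                  DT_API_TOKEN: $DT_API_TOKEN
        - step:
            name: Performance test and quality gate
            script:
              # Step 5: run performance tests while Dynatrace collects monitoring metrics
              - ./run-performance-tests.sh
              # Step 6: push a Dynatrace annotation event with the test start/stop times
              - pipe: dynatrace/dynatrace-push-event:0.1.0
                variables:
                  DT_URL: $DT_URL
                  DT_API_TOKEN: $DT_API_TOKEN
              # Step 7: call the Keptn quality gate; a "fail" result stops the pipeline
              - pipe: dynatrace/keptn-quality-gate:0.1.0
                variables:
                  KEPTN_URL: $KEPTN_URL
                  KEPTN_API_TOKEN: $KEPTN_API_TOKEN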

Dynatrace information events

Dynatrace information events enable Bitbucket pipelines to provide additional deployment and test details to Dynatrace. This allows Dynatrace to correlate a successful deployment with what follows it, such as an uptick in resource consumption, a drop in performance, or an outright crash.

Information events speed up triage. They provide context about what's happening within the application and to the application via the continuous delivery pipeline. People can understand what's happening more quickly when they can see production events, load testing, and deployments together. With the integration between Dynatrace and Bitbucket Pipelines, teams can see the system, job, and team responsible.

Below we can see a deployment event for a “front-end” service, linking right back to the specific Bitbucket pipeline that performed the deployment.

[event.png: Dynatrace deployment event for the "front-end" service, linking back to the Bitbucket pipeline that performed the deployment]

These same events inform the Dynatrace AI engine, Davis, and if they relate to the root cause, they are pulled into the Dynatrace problem card. Below we can see both the information event (#1) and the deployment event (#2) for the service that Davis determined to be the root cause of a problem.

[problem.png: Dynatrace problem card showing the information event (#1) and deployment event (#2) on the root-cause service]

By adding just a few key pieces of information to your pipeline configuration, as shown below, the Dynatrace push event Bitbucket pipe sends an event to Dynatrace that includes the initiator and hyperlinks back to the pipeline. This data helps teams dramatically reduce the time needed to identify the right owners of a problem, in turn reducing mean time to resolution (MTTR).

[yaml.png: bitbucket-pipelines.yml snippet configuring the Dynatrace push event pipe]
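
As a rough sketch, the snippet behind that screenshot could look something like this; the pipe reference and variable names below are illustrative placeholders, so check the published pipe's README for the exact interface.

    # Illustrative invocation of the Dynatrace push event pipe inside a pipeline step
    - pipe: dynatrace/dynatrace-push-event:0.1.0
      variables:
        DT_URL: $DT_URL                    # e.g. https://<environment-id>.live.dynatrace.com
        DT_API_TOKEN: $DT_API_TOKEN        # token permitted to ingest events
        EVENT_TYPE: "CUSTOM_DEPLOYMENT"
        DEPLOYMENT_NAME: "front-end"
        DEPLOYMENT_VERSION: $BITBUCKET_BUILD_NUMBER
        SOURCE: "Bitbucket Pipelines"      # shows up in Dynatrace as the event initiator
        # Back-link so the event in Dynatrace points at the pipeline run that raised it
        CI_BACK_LINK: "https://bitbucket.org/$BITBUCKET_REPO_FULL_NAME/addon/pipelines/home#!/results/$BITBUCKET_BUILD_NUMBER"
        TAG_RULE: "front-end"              # Dynatrace tag identifying the service to attach the event to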

Automated performance validation using Keptn Quality Gates

Enterprise operations are complex, with millions of lines of code, hundreds of connected services, and hybrid or multi-cloud environments. One break can have serious implications for people, the business, and the brand. To avoid this, you need to understand how your applications are performing in real time: the impact on end users, the system components involved, and the root cause of issues. Automatic scoring and grading of builds, artifacts, test results, feature flags, canary deployments, and even full-blown releases must be part of every modern progressive delivery process.

Dynatrace is leading an open source project called Keptn that defines a quality gate specification based on Service Level Objectives (SLOs) and Service Level Indicators (SLIs), along with a quality gate service that executes it. The Keptn service automatically scores and grades builds, artifacts, test results, feature flags, canary deployments, and even full-blown releases within the delivery process. Bad code or configuration is identified and stopped before users see the impact. The picture below shows the concept of build-over-build evaluation for example SLIs and SLOs.

[slo.png: build-over-build evaluation of example SLIs against their SLOs]
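
To make the SLO/SLI definitions concrete, a minimal slo.yaml in the Keptn specification might look something like the following; the indicator names and thresholds here are examples rather than values taken from the pipeline above.

    spec_version: "0.1.1"
    comparison:
      compare_with: "single_result"       # score against the previous build's evaluation
      include_result_with_score: "pass"
      number_of_comparison_results: 1
      aggregate_function: avg
    objectives:
      - sli: response_time_p95            # SLI defined in the matching sli.yaml (a Dynatrace metric query)
        pass:
          - criteria:
              - "<=+10%"                  # no more than 10% slower than the previous build
              - "<600"                    # and under 600 ms in absolute terms
        warning:
          - criteria:
              - "<=800"
        weight: 1
      - sli: error_rate
        pass:
          - criteria:
              - "<=+5%"
    total_score:
      pass: "90%"                         # weighted score required for an overall "pass"
      warning: "75%"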

A Keptn quality gate acts more like a test than like monitoring: instead of raising alarms, it evaluates monitoring data and architectural changes against an explicit definition of service level objectives, build over build, and returns a pass/fail result. By adding a Bitbucket pipe "wrapper" around the Keptn quality gate service, the pipeline itself receives that pass/fail result. Pulling SLO reporting into the continuous delivery pipeline lets software teams make production changes at the pace of the Internet while increasing the safety of those changes.
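
Wired into the pipeline, the quality-gate step might look roughly like the following; the pipe reference, the evaluation-window variables, and the Keptn project/service/stage names are illustrative placeholders rather than the published pipe's exact interface.

    - step:
        name: Keptn quality gate evaluation
        script:
          # Evaluate the SLOs over the performance-test window; a "fail" result makes the
          # pipe exit non-zero, which stops the pipeline so the issue can be triaged
          - pipe: dynatrace/keptn-quality-gate:0.1.0
            variables:
              KEPTN_URL: $KEPTN_URL
              KEPTN_API_TOKEN: $KEPTN_API_TOKEN
              PROJECT: "website"            # illustrative Keptn project/service/stage
              SERVICE: "front-end"
              STAGE: "staging"
              START_TIME: $TEST_START       # captured before the load test started
              END_TIME: $TEST_END           # captured after the load test finished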

Try it out today

To make it easier for any team to take advantage of these workflows, we have published these two pipes as "official" Bitbucket pipes. They make it easy for any team to leverage Dynatrace as a powerful complement to Atlassian Bitbucket Pipelines.

[official-pipe.png: the Dynatrace pipes listed as official Bitbucket pipes]

Dynatrace automatically layers on AI-powered monitoring that goes far beyond alerts on individual metrics. It treats monitoring configuration as code, making data consumable as a self-service for development, quality engineering, architects, DevOps, SRE, ops, and business teams. Dynatrace has many capabilities and out-of-the-box features that support performance engineering and testing. Here is a short list to get you started.

  1. Sign up for the 15-day free Dynatrace Trial
  2. Install the Dynatrace OneAgent to gather metrics and feed the Dynatrace AI-powered problem causation engine, which automatically shows impacted users, systems, and root cause during testing
  3. Read more about how to use the Dynatrace push event pipe here
  4. Read more about how to use the Keptn quality gate pipe here
  5. Watch our recent Keptn Community meeting video, where we demo a pipeline using these pipes, and a recent Dynatrace Performance Clinic video, where we discussed automated scoring and analysis with the Dynatrace API and the Keptn quality gate service

