
Top questions answered from “Atlassian DevOps Talks: Fireside Chat with the CICD Industry”


Thank you for joining our webinar, Atlassian DevOps Talks: Fireside Chat with the CICD Industry. We hope you enjoyed it. If you missed it, want to share it with your team, or watch it again - here's the link to watch on-demand.

We’ve compiled the most commonly asked questions & shared our team's answers below.

We know you’re hungry for resources so here’s what we shared during the webinar (with a few extras):


Getting started

How do you distinguish between CI and CD processes?

Traditionally CI = Code to Artifact and CD = Artifact to Prod/Customer

I am new to CI/CD and would like to learn about it. Where and how should I start learning about CICD?

I would recommend the book "Continuous Delivery" by Jez Humble and Dave Farley. Or, have a look back at this webinar on CI/CD:

Where do you begin if there is no pipeline?

Start small and iterate. Whenever you set up a new project, your first action should be to create a pipeline with simple unit tests. As the project becomes more complex or requirements enter the domain of CICD (ex: must be deployed on a daily basis), add behaviors to the pipeline.

How do you share pipelines between similarly structured applications?

You can copy and paste the Bitbucket Pipelines .yml files and make minor changes as needed.
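For reference, a minimal bitbucket-pipelines.yml that could serve as the shared starting point might look like the following sketch (the image and commands are placeholders for your stack):

```yaml
# Minimal starting-point pipeline; image and commands are illustrative.
image: node:18

pipelines:
  default:            # runs on every push to any branch
    - step:
        name: Build and test
        caches:
          - node      # reuse node_modules between runs
        script:
          - npm ci
          - npm test
```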

What would be a good process to build a DevOps pipeline for, or a good starting point to start CICD?

Any development process that is static, tedious, and/or prone to human error is a good candidate: running tests (tedious), updating a release image (static), and deploying an application to a marketplace/registry (prone to human error).

Are there any life-like, working use-case demos available to get to know what elements are working together, and how, in an optimal environment?

This demo shows how to manage an image classification application called ImageLabeller using Atlassian Open DevOps.

What is a feature flag? 

How can feature flags help with feature versioning, compared to rolling back?

Automated testing for code security and quality 

Where should the various types of tests be run in a CICD world (unit, integration, end-to-end, post-deployment verification)? 

Ideally unit tests should be run on every push. More expensive tests that fall under the family of integration tests should be run before a merge or release occurs. All of this falls under CI, with the exception of post-deployment tests.

How can security be impacted by automation? In both good and bad ways?

Sensitive values need to be protected. API keys and other tokens need to be requested and used. Code requires checking. Packages should be validated and verified. It's a tough problem! The good news is that the CICD ecosystem is full of awesome solutions to help you.

Any tips on balancing CICD/automation with human review?

CICD is essentially an automation of your development process. There are some parts of that process that will be better (more reliable, consistent, etc.) as automations, and there are some that will fundamentally require human intervention. Identifying your biggest pain points is key.

We are planning to implement Shift-Left in QA Automation - when do we need to run our Smoke tests in a CICD pipeline?

Unit Tests are traditionally run in the CI/Build process, and Smoke Tests are traditionally run in CD as the artifact promotes to the QA environment.

Given that security is so important and inherent in code/IAC, what's the best way to build security of product/deployment into CICD? And what are good tools to use in the IDE to even prevent these "bugs" from getting into the pipeline in the first place?

Start small and grow from there. There are many vendors in this space (e.g. Snyk). Some of these vendors specialize in code; others focus on supply chain. It's a big space. Vendors like Terraform and Pulumi have solid offerings in this space. Bugs are impossible to avoid. Do the best you can and ensure you have rich telemetry and observability in your application for when things go wrong.

How and where in the pipelines should we be doing DevSecOps ?

Everywhere! In an alternate reality, DevSecOps might have been called “Continuous Security,” with a key idea being that security is everyone’s responsibility so it needs to be included in all aspects of development and operations, not just an afterthought. As for how, check out this article on incorporating some popular security tooling into your process:

Do you have advice on how and where tests can/should be run in a CICD setting?

A good starting point would be:

  • Run unit tests and integration tests on push into feature branches
  • Run end-to-end system tests after deployment to Test and Staging environments
  • Run unit tests and integration tests on PR merge to mainline
  • Run end-to-end system tests after deployment to all Production environments
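As a sketch, that split could be expressed in a bitbucket-pipelines.yml like the one below (the make targets and deploy script are hypothetical):

```yaml
pipelines:
  branches:
    'feature/*':
      - step:
          name: Unit and integration tests
          script:
            - make test-unit test-integration
    main:                         # runs on merge to mainline
      - step:
          name: Unit and integration tests
          script:
            - make test-unit test-integration
      - step:
          name: Deploy to Staging
          deployment: Staging
          script:
            - ./deploy.sh staging
      - step:
          name: End-to-end tests against Staging
          script:
            - make test-e2e ENV=staging
```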


Any hints on how to make life easier when dealing with tasks that span multiple services and deploy orchestration? Prod is relatively easy to deal with semi-manually, but the real pain is with QA environments and deployment on-demand.

Yes, managing so many environments can become quite a chore. And, if not done well, then configuration drift (when the environments don't match) can cause a lot of problems, like flaky CICD pipelines. There are a range of tools that can help, but the general domain is "configuration management." A popular style is to encapsulate infrastructure configuration with containers and propagate change by moving containers through the environments.

Do you have any videos or other documentation about deploying multiple microservices within a single repo?

Atlassian doesn't deploy multiple web services from a single repository, but you can consult this Support resource for more information:


What is the best way to include all developers in the DevOps culture of a project so that it’s not the dev handling everything?

We have to empower each team to work with the systems they use, not just shift everything left to the developer. Allow security to create the governance using something like OPA. Let the infrastructure teams define the different infrastructures that can be used and allow them to interact with the systems they are used to, e.g. Terraform. Let the DevOps team build the pipelines that meet security/quality needs, and the dev shouldn't even see the tooling unless something goes wrong, in which case they should get all the information in the system needed to solve the problem.

What suggestions do you have for championing the message of pipeline automation within your organization?

Identify your pain points. How many steps are involved in getting your code to the user? How many of these steps can go wrong (and what impact does that have)? Can these steps be automated in a way that will make them reliable (and no longer a pain point)? Discuss them with your stakeholders. Translate them into potential costs (if you can). It can be pretty easy to brush off a fancy new technology, but it is never easy to brush off money/velocity/quality actively being lost.

What is the impact of different silos on the CICD process?

Different silos cause different outcomes, mostly constrained by the communication structures of these silos. Regardless of which functions are siloed, the DevOps community has observed 4 key metrics, known as the DORA metrics: Deployment Frequency (how often an organization successfully releases to production), Lead Time for Changes (the amount of time it takes a commit to get into production), Change Failure Rate (the percentage of deployments causing a failure in production), and Time to Restore Service (how long it takes an organization to recover from a failure in production). Furthermore, DORA’s research over many years shows that these metrics correlate with business outcomes such as profitability, market share, and productivity. In short, there is a high correlation between silos of any kind and poor business performance.
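To make the four metrics concrete, here is a small sketch (with made-up deployment records) of how each could be computed from a pipeline's history:

```python
from datetime import datetime, timedelta

# Hypothetical deployment records:
# (commit_time, deploy_time, caused_failure, time_to_restore)
deploys = [
    (datetime(2023, 1, 2, 9), datetime(2023, 1, 2, 15), False, None),
    (datetime(2023, 1, 3, 10), datetime(2023, 1, 4, 11), True, timedelta(hours=2)),
    (datetime(2023, 1, 5, 8), datetime(2023, 1, 5, 12), False, None),
    (datetime(2023, 1, 6, 14), datetime(2023, 1, 7, 9), False, None),
]

days_observed = 7

# Deployment Frequency: successful releases per day
deployment_frequency = len(deploys) / days_observed

# Lead Time for Changes: commit-to-production duration, averaged
lead_times = [deployed - committed for committed, deployed, _, _ in deploys]
avg_lead_time = sum(lead_times, timedelta()) / len(lead_times)

# Change Failure Rate: fraction of deployments causing a production failure
failures = [d for d in deploys if d[2]]
change_failure_rate = len(failures) / len(deploys)

# Time to Restore Service: averaged over the failed deployments
avg_time_to_restore = sum((d[3] for d in failures), timedelta()) / len(failures)
```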

What do you guys think about handling CICD and software development by the same team?

This is a core tenet of DevOps. You build it, you run it. This empowers teams to make the necessary changes to make things work smoothly. More info:

Technical use cases 

We have a Jenkins instance. I created a declarative pipeline that is in its infancy stage. I wanted to further test what I learned via PluralSight utilizing Docker, but I do not know if I can set this up on the same server that has our existing Jenkins Instance.

Docker is a great way to encapsulate your dev tooling and is the underlying technology in both Bitbucket Pipelines and Circle CI, so your principle is sound. What you will need on your Jenkins agent is a container client (Docker itself or an analog like Podman); the containers and hosting can run on a dedicated Docker host if that helps isolate the compute resources.

Is there a way to edit and change code, pull it, push it, etc. in Bitbucket directly through the IDE without having files locally on machines?

Bitbucket Cloud works with Eclipse Che, which supports this on Kubernetes clusters.

What should be the ideal approach to have CICD for IAAC (handling create / update/ delete)?

I would use the right tool for the job. Generally, Terraform and Pulumi are great at this, and standing up infrastructure this way is powerful. But, as always, be careful!

We want to switch to Kanban for CI/CD but there is some validation that falls outside of the engineering team. How do we capture that work in Kanban? Is it a new ticket for the resource responsible for completing that task or does the original task go back to "To Do" and be passed to that resource?

Agile & DevOps both emphasize how expensive it is to have those kinds of cross-team dependencies and recommend removing them. To 'simulate' that ideal, maybe members of the external team can 'act' like they are on the team and stay in the same Jira project. Or, use Automation to help coordinate issue workflows across projects.

How do we write a CI/CD YAML file from scratch on Bitbucket?

We are looking for a website monitoring integration with Confluence (eg, Postman integration with Confluence). We want to monitor the URL to see if its getting 200 OK response or 404 Error, then update a Confluence table automatically as "Running" or "Failed" based on the response received. Are there tools that can do this?

That’s an interesting use case for Confluence. I can confirm there aren’t any Marketplace Apps that perform that exact function. There are some generic reporting tools that might provide a sufficient bridge. Otherwise, you might build a Confluence Connect App using the “dynamic content macro”:

Have you successfully implemented CI/CD for a monolith? What were the gotchas? What were the celebrations?


Gotchas:

  • Many complex integration tests can result in a very long testing job. If your language/tooling supports annotation of tests, it is likely worth investing some time into so you can run tests in stages (cheap unit tests on every push, expensive integration/E2E/acceptance tests when merging to release/main).
  • Cache build artifacts and dependencies. The bigger your monolith is, the longer it will take to build. Caching dependencies and artifacts that are static in nature will usually result in a faster and more efficient pipeline.
  • Start small and iterate. It is important to identify pain points of your development and release processes. Sometimes it will take multiple passes to effectively automate your way out of these pains, so break everything into small and easy to understand steps/jobs - get your tests running. 


Celebrations:

  • Reduction of cognitive complexity. Most of the monoliths that I have implemented pipelines for have required some pretty intricate manual processes, often without sufficiently clear documentation. Being able to transfer that cognitive overhead into an easy to reference configuration file frees up mental resources to focus on the actual application rather than how to get that application to my users.
  • Quality increase. Even with a well-organized codebase, monoliths can become quite difficult to build a mental model of. Ensuring that the development processes we define are followed every single time a push is done can help us catch sneaky edge cases and easy-to-miss bugs.
  • Velocity increases. When I can be confident that my pipeline will catch the majority of my mistakes and reliably deploy my application, I can focus all of my effort into the technical problem solving and get a lot more done.
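The caching advice above can be sketched in a bitbucket-pipelines.yml; Bitbucket Pipelines ships predefined caches (e.g. maven, gradle, node) that persist dependency directories between runs. The build command below is illustrative:

```yaml
pipelines:
  default:
    - step:
        name: Build monolith
        caches:
          - maven          # predefined cache of ~/.m2 between runs
        script:
          - mvn -B verify  # substitute your own build command
```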

Assume this is our branching strategy: dev->qa->release->master. I have a hotfix branch and it is deployed to Production. If I want to merge the HF branch into current development, where should I merge it for best practice or better flow?

There are two ways I’d recommend handling this problem:

  • Defining an alternative workflow exclusively for deploying hotfixes that would look something like hot-fix->release->master/special-deploy. Again, this would just be an alternate workflow and once I’ve extinguished the fire, I would then push my changes through the standard workflow.
  • Adding hot fix-specific parameters to my pipeline configuration (to imply a hot fix variant of my standard workflow) and preparing a hot fix procedure with each team/stakeholder involved so that when a hot fix is deployed any optional steps are skipped and any manual tests/interventions are prioritized.

What’s your tried and tested suggestions to store the pipeline logic as code?

Store your configuration files in a VCS - storing CI/CD configuration files in a VCS is both the most versatile and reliable option. If proper controls are enabled, like branch protection, a VCS provides versioning (useful for audits and rollbacks) and accessibility, both of which are important for experimentation. Experiment with your configuration when your project grows in size/complexity, when your CI/CD provider releases new features (the vast majority of these are built to make your job easier), when your development processes change, and when you learn about new aspects of your system and processes.

Going outside the context of typical software engineering, where velocity is 'less important' than reliability - would the general approach to CICD change for data science and ML models and workflows (e.g. hundreds of model APIs)?

Ensuring that models are being built and deployed reliably is crucial. The ideals behind CI/CD are universal and in my experience, are particularly useful for ML-focused workflows.

Best Practices 

How would you handle changes that might need to jump ahead of other changes? For instance, we have a feature that needs human review which might take several days to approve, but then after that feature is committed, a bug fix which needs immediate release is committed. How do you get the bug fix to production first?

This is a great use case for feature flags. A good default way to handle this scenario is to put the first change behind a feature flag, giving your team more granular control over which changes get enabled when. This would allow you to deliver the bug fix before enabling the new feature, even if the changes were merged in the reverse order.
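As a minimal sketch of the idea (an in-process flag store; real systems such as LaunchDarkly add per-user targeting and remote configuration):

```python
# Hypothetical in-process feature flag store.
FLAGS = {"new-checkout-flow": False}  # merged but not yet enabled

def is_enabled(flag: str) -> bool:
    return FLAGS.get(flag, False)

def checkout() -> str:
    if is_enabled("new-checkout-flow"):
        return "new checkout"    # code ships dark until the flag flips
    return "legacy checkout"     # a bug fix here can release immediately

# The feature is merged first, but stays off while the bug fix deploys:
assert checkout() == "legacy checkout"

# Enabling the feature later is a flag flip, not a deployment:
FLAGS["new-checkout-flow"] = True
```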

Should code developed for CICD pipelines be documented/tested as code for the "core" company application?

It depends on what the code is doing, but if the code is not self-documenting in nature then it would certainly be easier for newcomers to understand what the intended behaviors are with some clarifying documentation.

What is a best practice to implement CICD for an environment without internet access?

A great practice for offline environments is to enable pre-commit hooks to run your test suite and build actions (assuming they can be done locally).

Are there best practices to deploy software to multiple on-site self-service kiosks that need to work locally because of hardware?

You might find some value in the following resource:

What are best practices for regularly updating different tools, software, etc.?

Regularly updating dependencies is a part of good engineering hygiene that is easy to overlook. Standardizing on scanning tools like Snyk or SourceClear to flag dependencies with known vulnerabilities is a good way to ensure teams aren't neglecting to keep their dependencies up to date. It’s also a great way to validate packages before and during builds to be proactive about securing your dependencies against known exploits or supply chain attacks.

How can we minimize the run time of pipelines?

Pipeline run times are tricky. Getting their execution times down depends on a variety of factors. Can you split up the work across agents? Can you load work up-front and perform the work on the developer's machine? Do you really need to perform each step every time you commit to the code repo? You need to think critically about the steps involved when constructing your pipeline because the decisions you make will impact run-time performance.

What is the order of operations for getting a project up to speed after swapping projects where you may go from walking back to crawling?

A lesson from the "Lean Software" world is that "task switching" (like between projects) is expensive. But teams can reduce this by creating a "Project API." Some developer productivity can be reclaimed with the right tools inside the repo: a good README (careful not to get too long), environment bootstrapping (more or less a standard part of dependency management these days, like pyenv/pipenv), and a "build script" to make it easy to do common things (yes, even when so many devs use IDEs, the CLI still has a place).

There are a lot of products in the market today that provide good and solid CICD solutions. Do you think that companies that stick to "the old way" using in-house CICD servers, is a bad practice? I've worked a lot with mobile applications (games, apps, etc.) and it's difficult to migrate.

This really depends on your environment and needs. If, for legal reasons, you require complete control of your data in a way that is easily auditable, an on-prem deployment makes sense. As it does if you're building games and regularly transferring large files/assets. However, we all continue to improve our products/offerings and this may not be the case in the future.

Any advice for dealing with multiple repositories/pipelines which all have to share the same set of pre-production environments?

This kind of indirect coupling between pipelines is less than ideal while also incredibly common. What can work well is to treat the pre-prod configuration like another versioned dependency, as with upstream libraries. The repos should declare which version of the environment they are compatible with, and as configuration changes, the pipelines should test if the repos are compatible with where they are being deployed. In short, seek to make each repo’s dependencies on the common environment explicit, and use CI/CD to test them. That way, the repos & their pipelines will tell you when there are change collisions or change drift.
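One way to make that explicit dependency check concrete is a small compatibility gate in the pipeline. The version scheme and rule below are assumptions for illustration:

```python
# Sketch: each repo declares which pre-prod environment version it supports;
# the pipeline fails fast on a mismatch instead of deploying into drift.

def compatible(required: str, deployed: str) -> bool:
    """Compatible when major versions match and the environment is at
    least the minor version the repo was tested against (assumed rule)."""
    req_major, req_minor = map(int, required.split("."))
    dep_major, dep_minor = map(int, deployed.split("."))
    return req_major == dep_major and dep_minor >= req_minor

# e.g. `required` read from a file in the repo, `deployed` from env metadata
assert compatible("2.3", "2.5") is True   # env moved ahead, still fine
assert compatible("2.3", "3.0") is False  # breaking env change: stop the pipeline
```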

Feature flags are controlled by developers? We've been using feature flags for about 7 years, though we call them entitlements. They are able to be turned on/off for any user or group of users and our customer support team has always managed it.

I don't think there is a one-size-fits-all approach to feature flags. It will depend on the use case. Giving support teams direct access to toggle functionality on/off makes a lot of sense to me. At Atlassian, we also use feature flags to control backend changes (e.g. performance optimizations that are more appropriate for developers to control as they monitor logs, system metrics, etc).

From a consultant perspective, when working on client applications for a limited time, what are your tips for best supporting the client at their DevOps maturity level, while making sure your delivery is enabled for success, making sure when you leave the client is left in a good position to keep going?

Start small and build up an automated pipeline over time. Make sure the customer understands what you're automating and why. Also, take the time to explain to them how to make changes once you move on to another client. People, processes, and tools!

What is the best approach for setting up CI/CD for apps (Android, iOS, Windows)?

  • One of our panelists explained, “CI = Code to Artifact and CD = Artifact to Prod/Customer”. So CI for apps is pretty straightforward. The Apple world is a little tricky with custom hardware so you might want to select a CI tool that has Apple support built-in. You can find Atlassian’s overall guidance on CI here:
  • According to that simplified definition, CD is trickier because part of the “artifact to customer” flow includes an app store that you don’t control and with limited means for automation. App store policies and tooling sometimes leave few options but “batched deployment” (the opposite of continuous). But you can still apply many of the practices of continuous delivery, which means automating so much of deployment that you hand “a button” to a non-technical person and make deployment a business decision, rather than a technical chore. 

What tests should be considered for your CI/CD pipeline? Is a single test of the app's readiness for its production configuration enough?

A good starting point would be:

  • Run unit tests and integration tests after pushes to feature branches
  • Run end-to-end system tests after deployment to Test and Staging environments
  • Run unit tests and integration tests after pull request merge to mainline
  • Run end-to-end system tests after each deployment to a production environment

When increasing automation and CICD practices, there are more parameters to look for, more breaking points, and the process becomes more complex. Any opinion on that?

CI/CD automation is software, just like the application that flows through it. As such, it’s wise to observe the complexity and manage it. We can take inspiration from traditional computer science metrics to look for things like number of branches, number of dependencies, or even just lines of code, and work toward “elegance,” the least amount of code to solve the problem. Overall, increasing pipeline complexity translates into cognitive complexity, which is bad. Everyone on the team should understand how the pipeline works and be able to fix it when it breaks. The KISS principle applies very strongly to CI/CD.

Do you have any experience with DevOps & Scrum? Do they conflict? Who should lead either process?

With file-based pipelines definitions inside repositories, what's the recommended way to not repeat yourself?

In general, the mechanism is just like code: shared functions. As an Atlassian example, Bitbucket Pipelines makes use of YAML anchors to allow for re-use of steps:
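For instance, Bitbucket's documented pattern uses a YAML anchor (&) to define a step once and an alias (*) to reuse it across branch pipelines (the step contents here are placeholders):

```yaml
definitions:
  steps:
    - step: &build-test        # anchor: define the step once
        name: Build and test
        script:
          - npm install
          - npm test

pipelines:
  branches:
    develop:
      - step: *build-test      # alias: reuse the anchored step
    main:
      - step: *build-test
```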

Is CICD an appropriate method to build a new web-platform from scratch or is it better suited for product development on top of an existing app/site?

Yes. In fact, it is easier to do it when starting from scratch because all design decisions can be made with the requirements of CICD upfront. Moving to CICD after the fact is usually more difficult than adopting it from the beginning.

How can common CI/CD techniques be applied to microservices?

The key connection between microservices and CI/CD is the idea of deploying each microservice independently. Ideally, microservices help make it simple to make changes because the “blast radius”, not just for code but for production too, is limited by the independence of microservices. Unfortunately, many teams struggle with interpreting both CI/CD and microservices advice and end up with 1 pipeline for all the services (known as a distributed monolith).

Do you use Git submodules for build & deployment?

Yes. Git submodules are useful tools. See:

What or how do you monitor your CICD pipelines to know what is in Prod and if there are any issues in Prod?

Your CI/CD tool should tell you if there are issues in your pipeline while creating a production release/deployment, and ideally fail if there are. Additional performance visibility is important, and for that I recommend using observability tools in all of your environments (QA, staging, production, etc.). Your applications and services should be instrumented, and if there is any post-deployment configuration of those tools, that configuration should be included as part of your pipeline.

Where do you store configuration files? How do you promote change when adding a new property?

Source control and some kind of CI/CD pipeline.

Any advice or lessons learned with CICD that uses gigabytes of tools and outputs gigabytes of assets and state through the steps? Lots of CI seems to assume each step's setup (e.g. "npm install") is cheap, or that copying assets is fast. Equally, we don't want the "special" build pet server.

This is a tough question - transferring many large files is computationally expensive and from what I can tell, there is no clever way around that. The best advice I can give to avoid having a specialized server would be to use as many static assets as possible and cache them when possible. When static assets and caches are not possible, parallelizing the compute required to create such assets can be a great way to speed up your pipeline, but again, this isn’t always an option.
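For the parallelization option, Bitbucket Pipelines supports parallel steps; a sketch is below (the script names are hypothetical, and artifacts from the parallel steps flow into the final step):

```yaml
pipelines:
  default:
    - parallel:                  # both shards run at the same time
        - step:
            name: Build assets (shard 1)
            script:
              - ./build-assets.sh shard-1
            artifacts:
              - dist/shard-1/**
        - step:
            name: Build assets (shard 2)
            script:
              - ./build-assets.sh shard-2
            artifacts:
              - dist/shard-2/**
    - step:
        name: Package
        script:
          - ./package.sh dist/   # consumes artifacts from both shards
```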


How many anti-patterns are a result of technical debt that is being accommodated for?

So many! I think effective CI/CD practices help bring that technical debt to light. In a prior webinar, we went into a handful of practices (and some related tools) that should help teams understand their technical debt a bit better, and to help prioritize when that debt seems overwhelming:

Feature flags feel like an anti-pattern in the sense that it may create a combinatorial explosion of problems. Is this a DevOps CI/CD problem or an implementation/usage problem?

That's a keen observation. In the early days of feature flags, many teams noticed an explosion of complexity when they left feature flags in code forever. Since, many teams, and even the feature-flag management tools have come to understand there is a lifecycle to feature flags. It is critical to understand that each is a tiny bit of technical debt (intentionally incurred for the benefits mentioned on the webinar) but that debt must be repaid by removing them after the feature has become "normal." Be sure to manage the "work in progress" for feature flags: have a practice to regularly review and remove when you can.



Is there a plan to support reusable/extendable YAMLs (bitbucket-pipelines.yml fragments) to centralize and affect several projects at once?

The Bitbucket team is working on this (shared configuration files) and plans to ship support in 2023.

How can we manage Bitbucket variables more easily with automation?

You could point your automation at Bitbucket's REST API for Bitbucket Pipelines. Specifically, the "pipelines" resource is available to help manage variables:
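As a sketch, a script could target the repository-level pipelines_config/variables/ resource; the endpoint path and field names below should be verified against the current Bitbucket Cloud REST API docs before relying on them:

```python
import json
from urllib.parse import quote

API_BASE = "https://api.bitbucket.org/2.0"

def variable_request(workspace: str, repo: str, key: str, value: str,
                     secured: bool = True):
    """Build the URL and JSON body for creating a repository-level
    pipeline variable (assumed endpoint; check the Bitbucket docs)."""
    url = (f"{API_BASE}/repositories/{quote(workspace)}/{quote(repo)}"
           f"/pipelines_config/variables/")
    body = json.dumps({"key": key, "value": value, "secured": secured})
    return url, body

url, body = variable_request("my-team", "my-repo", "DEPLOY_TOKEN", "s3cr3t")
# POST `body` to `url` with basic auth (e.g. an app password) to create it.
```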

What makes Atlassian CICD better than other CICD tools out there?

"Better" is subjective. I'd advocate to use what tools make you happy and get the job done. Integration is the name of the game when it comes to CI/CD. I wouldn't pick just one tool for everything.

Is there any proper way to generate an auth token for Bitbucket REST api for only backend without using oauth?

Yes, Bitbucket supports app passwords:

Is there a difference between Jenkins and Bamboo?

Is there documentation on the best way to securely pass variables from one step to another in Bitbucket (for example, if the variable is generated in step 1, but needed in step 4)? I've passed values by writing to artifacts, but that doesn't apply for values that should not be shared.

We don't recommend using artifacts to pass secrets or other sensitive information between steps, as the information will be accessible by anyone with repository read permissions in the "Artifacts" tab. We have future plans to offer this as a first-class product feature. For now, you can achieve it by:

  • Using the Bitbucket Cloud API to store and read secured variables as part of your build.
  • Storing and reading secrets to/from a third-party system: an S3 bucket, an external database, etc.

How would one run Cypress tests in a Bitbucket pipeline?

Are there any plans to add project-level variable groups in Bitbucket?

This is on the Bitbucket product roadmap for 2023.

At what point should we consider moving from Bitbucket Pipelines to something like Jenkins?

Depends on the needs for your organization. Jenkins isn't a magic bullet. Yes, it's popular but it does have its quirks. Like any software, you need to evaluate it carefully.

How do you think about the deployment of a feature that spans multiple (Bitbucket) repositories and a sort of ‘combined’ pipeline to deploy it since they depend on each other?

Take a look at: This can be set up to trigger one pipeline from another, enabling you to build dependent pipelines.

How do you restrict who can run a Bitbucket pipeline?

Any user with Write access can trigger a pipeline, but you can use Merge Checks and Branch restrictions to limit which users can trigger pipelines on specific branches.

How do you use Deployments under Bitbucket to push builds to the k8s cluster?

Open DevOps and Jira

How does Atlassian's solution for DevOps compare to other CI/CD integration solutions, such as Opsera?

Opsera is good at orchestration and stitching integrations together with its low-code/no-code solution. Both, however, require an integration to be built.

Versioning is one of the hardest things I have to deal with. Are there any tools that help with this?

Jira Cloud has long had a feature specific to the "versions" you are trying to manage: One of the things that makes versioning hard is all the incoming and outgoing dependencies within an org, especially for distributed architectures like microservices. To help with the new facets of distributed architecture, you might want to learn more about Compass:

How do you create a flow with approvals in Jira?

We automate our user access to Jira Cloud via Atlassian Access. When will Bitbucket integrate to Atlassian Access?

Bitbucket supports some features of Access such as SAML, password policies, etc. today. You may be asking about shared user management (i.e. defining groups once and assigning those groups permissions in Jira, Bitbucket, etc.). The team is working on that now and plans to ship broadly in 2023.

What should be a generic workflow in Jira with DevOps?

  • Issues live in a backlog
  • Issues are moved into a sprint during sprint planning
  • Issues from a sprint are moved to in progress by a developer during the sprint
    • If the developer gets blocked, the issue is transitioned to blocked
    • If the developer and management decide to pause that work, it goes back to the backlog
  • Once a piece of in progress work is implemented but not deployed it goes to In Testing
  • Once a piece of in testing work passes any testing requirements in the test environment it goes to In Staging
  • Once a piece of in staging work is ready to go to production it moves to the first production regions
  • Each production region has an associated workflow step
  • After production deployment is done the issue is moved to Done

We are new to this practice: CI/CD, DevOps, etc. Does Atlassian provide resources to get started?

We have a number of tutorials relating to DevOps practices here:

Can we automate creating subtasks for different types of issues if we have Jira Cloud?

Jira Cloud has a built-in automation feature that would allow you to create subtasks: Whether that will work may depend on when you want to create the subtasks, i.e., which events should trigger them.

Is there a good reference architecture for DevSecOps with Bitbucket? We are trying to figure out the best way to do things like storing .env files and supplying them on build time.

Here is an end-to-end example in this prior webinar:  And there are some great follow-on resources that explain how to weave Snyk, as an example of a DevSecOps tool, into that pipeline:
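As one possible shape: rather than committing .env files, store those values as secured repository variables and write them out at build time, then run a security scan as its own pipeline step. The step names, image, and commands below are illustrative, not a canonical reference architecture:

```yaml
# bitbucket-pipelines.yml (illustrative sketch)
pipelines:
  default:
    - step:
        name: Build and test
        image: node:18
        script:
          # API_KEY is a secured repository variable; the .env file is
          # generated at build time, never committed to the repository.
          - echo "API_KEY=$API_KEY" > .env
          - npm ci
          - npm test
    - step:
        name: Snyk security scan
        image: node:18
        script:
          # SNYK_TOKEN is also a secured repository variable.
          - npm install -g snyk
          - snyk test
```

Keeping the scan as a separate step means a vulnerability finding fails the pipeline before anything is deployed.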

How do you achieve self healing systems and create urgent Jira tickets if something goes wrong and needs attention?

Monitoring and alarming are paramount here, so you detect the issue as soon as possible. If the problem is caused by the deployment lifecycle, try to catch it before production deployment. Once a problem is detected: toggle feature flags back to their pre-problem values, raise a high-priority Jira issue, put it in the sprint and assign it to whoever is working on the original piece of buggy code, and notify everyone in the on-call hierarchy. The developer can then root-cause the problem, write regression tests, roll forward a fix, and catch the deployment up to the region where the issue was detected.
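The detect-and-react loop above can be sketched as pure decision logic. The threshold and action names here are invented for illustration; a real handler would wire these to your flag service, Jira, and paging tool:

```python
def remediation_plan(error_rate: float, deploy_in_flight: bool,
                     threshold: float = 0.05) -> list:
    """Decide what a self-healing handler should do when monitoring fires.

    Returns an ordered list of remediation actions (names are hypothetical).
    """
    if error_rate <= threshold:
        return []  # healthy: nothing to do
    actions = [
        "toggle_feature_flags_to_last_known_good",
        "raise_high_priority_jira_issue",
        "notify_on_call",
    ]
    if deploy_in_flight:
        # Catch the problem before it reaches further production regions.
        actions.insert(0, "halt_deployment")
    return actions
```

Separating the "what to do" decision from the "how to do it" integrations keeps this logic trivially testable.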

We are using Jira and Confluence. We will start using Bitbucket shortly. Is there a video or document on how we can setup CI/CD pipeline using Atlassian from scratch?

Take a look at these guides, specifically the Deploy section:

Can we leverage DevOps for non-software related processes, for example New Employee > Active Directory authorizations > create mailboxes > ready to work?

Look into using Trello to easily build cooperative workflows.

Currently, we have engineers writing scripts for specific actions. How can we leverage Jira/Bitbucket effectively to automate triggering such scripts (remove manual 'run' of scripts)?

Bitbucket Pipelines can invoke your scripts. Put them in a repository, clone that repository during the CI/CD pipeline run, and execute the script with appropriate parameters. This way you can write generic scripts and reuse them across many pipelines.
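A minimal sketch of that pattern (the scripts repository URL, script name, and flags are all placeholders):

```yaml
# bitbucket-pipelines.yml (sketch; the shared scripts repo is hypothetical)
pipelines:
  default:
    - step:
        name: Run shared deployment script
        script:
          # Clone the shared scripts repository alongside the project checkout.
          - git clone https://bitbucket.org/your-workspace/devops-scripts.git
          - chmod +x devops-scripts/deploy.sh
          # Pass environment-specific parameters so the script stays generic.
          - ./devops-scripts/deploy.sh --env staging --service "$BITBUCKET_REPO_SLUG"
```

Because the parameters come from the pipeline (built-in variables like `$BITBUCKET_REPO_SLUG`, or your own repository variables), the same script serves every repository that clones it.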


How can we leverage iam roles instead of AWS access keys in our pipelines?

Check out this documentation from Harness for more information:
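Harness handles cloud credentials through its own connectors, per the docs above. If your pipelines run in Bitbucket Pipelines, one approach is its OpenID Connect support, which lets a step assume an IAM role via a short-lived web identity token instead of stored access keys. This is a sketch; the role ARN and region are placeholders, and you should confirm the current syntax against Bitbucket's documentation:

```yaml
# bitbucket-pipelines.yml (sketch; role ARN and region are placeholders)
pipelines:
  default:
    - step:
        name: Deploy with an assumed IAM role
        oidc: true   # exposes $BITBUCKET_STEP_OIDC_TOKEN to the step
        script:
          - export AWS_REGION=us-east-1
          - export AWS_ROLE_ARN=arn:aws:iam::123456789012:role/pipeline-deploy
          - echo "$BITBUCKET_STEP_OIDC_TOKEN" > web-identity-token
          - export AWS_WEB_IDENTITY_TOKEN_FILE=$(pwd)/web-identity-token
          # The AWS CLI/SDK now assumes the role via the web identity token;
          # no long-lived access keys are stored in pipeline variables.
          - aws sts get-caller-identity
```

On the AWS side, the role's trust policy must accept Bitbucket's OIDC identity provider and be scoped to your workspace/repository.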

How can we run security scans using pipelines on code while pushing it?

Harness offers Security Testing Orchestration, which you can learn more about here:

Where can I get more information about feature flags?

Check out this resource from Harness:

How do you go about running Terraform within a CI/CD pipeline dedicated to a single microservice, while the Terraform state belonging to this microservice is only a part of the Terraform state of the environment you're deploying into?

Harness has technical documentation for using Terraform:
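A common pattern, independent of CI tool, is to give each microservice its own remote state with a distinct key, so the service's pipeline only plans and applies its own slice, while reading shared environment outputs via a remote-state data source. Bucket names, keys, and output names below are placeholders:

```hcl
# backend.tf for one microservice (sketch; all names are placeholders)
terraform {
  backend "s3" {
    bucket         = "example-terraform-state"
    key            = "env/staging/services/image-labeller/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-locks"   # state locking across pipelines
  }
}

# Read shared environment state (VPC, subnets, ...) without owning it.
data "terraform_remote_state" "environment" {
  backend = "s3"
  config = {
    bucket = "example-terraform-state"
    key    = "env/staging/terraform.tfstate"
    region = "us-east-1"
  }
}

# Example: reference a shared output from the environment state.
# vpc_id = data.terraform_remote_state.environment.outputs.vpc_id
```

With this split, a bad `apply` in one service's pipeline cannot corrupt the environment-level state or another service's state.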

What are good resources that explain best practices when first implementing feature flags to CICD?

Here are some best practices from the Harness blog:

When a feature flag is toggled for 10% of customers, how do you decide which 10% of customers get a feature that was toggled?

You can toggle based on any variable you want to segment on; e.g. region, demographic, customer profile, etc.
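Feature flag platforms handle the bucketing for you, but the underlying idea is usually deterministic hashing, so a given customer gets a stable answer across requests and assignments are independent between flags. A sketch (illustrative, not Harness's actual implementation):

```python
import hashlib

def in_rollout(flag_name: str, customer_id: str, percentage: int) -> bool:
    """Deterministically decide whether a customer is in a percentage rollout.

    Hashing flag_name together with customer_id keeps each customer's
    assignment stable over time, while different flags get uncorrelated
    buckets for the same customer.
    """
    digest = hashlib.sha256(f"{flag_name}:{customer_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100   # a stable bucket in 0..99
    return bucket < percentage
```

Ramping from 10% to 50% then keeps the original 10% enrolled, since their buckets don't change; segmentation rules (region, plan, etc.) are typically applied before this percentage check.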

Is there an objective way to balance risk reduction vs velocity?

Check out this resource from Harness for more information:

CircleCI

How best can one prioritize and manage a phased approach to automating? Any tips for socializing and (people) change management to adopt and adapt to the automation?

The best strategy I’ve found for managing automation phases is to take an iterative approach and go step by step: start by getting your automated tests to run on every commit/push. Once that works reliably, create a build artifact. Then move on to deploying that artifact. As for socializing the change and getting non-technical stakeholders invested, the key is to present both a problem that can be automated and a proposed solution. If it takes 3 hours to manually deploy all of your services and you present your stakeholders with an option to automate this reliably, it will be very difficult for them to justify not embracing the proposed automation.
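Those three phases map naturally onto a CI workflow with ordered jobs. Shown here as a CircleCI config sketch; the image, commands, and branch name are placeholders:

```yaml
# .circleci/config.yml (sketch; image, commands, and paths are placeholders)
version: 2.1
jobs:
  test:
    docker: [{ image: cimg/node:18.0 }]
    steps:
      - checkout
      - run: npm ci && npm test        # phase 1: tests on every push
  build:
    docker: [{ image: cimg/node:18.0 }]
    steps:
      - checkout
      - run: npm run build             # phase 2: produce a build artifact
      - store_artifacts: { path: dist }
  deploy:
    docker: [{ image: cimg/node:18.0 }]
    steps:
      - checkout
      - run: ./scripts/deploy.sh       # phase 3: deploy the artifact
workflows:
  build-test-deploy:
    jobs:
      - test
      - build: { requires: [test] }
      - deploy:
          requires: [build]
          filters: { branches: { only: main } }
```

The `requires` chain is what makes the phased rollout safe: you can ship the config with only the `test` job first, then add `build` and `deploy` once each earlier stage has proven reliable.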

How can we ensure confidence in others outside of the team so that we can continue to encourage improving our automation for a better CICD?

We can earn confidence and support from our peers by taking time to discuss which goals are most important to them, then working with them to write automations that achieve, support, or validate those goals.

What materials (design guides or examples) are available for CircleCI self-hosted runners?

There are a few pages in our documentation and on our blog:

In addition to these docs, you can also ask questions in our discussion forum - - or submit a request to our support team via
