This article contains answers to questions from the Q&A portion of the recent Scale CI/CD workflows faster with Bitbucket Pipelines webinar:
The way deployment locks are implemented prevents the use of parallel deployment steps.
We recently released Environments, which provide all the permission management of deployment environments without locks. We intend to extend this concept so the regular deployment features are available without a highly opinionated lock.
A stage's principal purpose is to allow multiple deployment steps to be grouped and run together.
We are planning new features that may provide the compartmentalization experience you’re looking for. We’ll have more to say about that soon.
Not at present. We’re looking at this problem in the near future. You can work around this with dynamic pipelines, but that’s not the intended future state.
You can define as many pipelines in a bitbucket-pipelines.yml file as you like. You can also define multiple Test, Staging, and Production environments. Each pipeline you define can have an independent trigger.
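A minimal sketch of this (the branch names, environment names, and deploy script are illustrative):

pipelines:
  branches:
    develop:
      - step:
          name: Deploy to Test
          deployment: Test
          script:
            - ./deploy.sh test        # hypothetical deployment script
    main:
      - step:
          name: Deploy to Staging
          deployment: Staging
          script:
            - ./deploy.sh staging
  custom:
    deploy-production:                # triggered manually or on a schedule
      - step:
          name: Deploy to Production
          deployment: Production
          script:
            - ./deploy.sh production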
https://support.atlassian.com/bitbucket-cloud/docs/yaml-anchors/
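For example, a shared step definition can be reused across pipelines with a YAML anchor (the step name and scripts below are illustrative):

definitions:
  steps:
    - step: &build-and-test
        name: Build and test
        script:
          - npm ci
          - npm test

pipelines:
  branches:
    main:
      - step: *build-and-test
    develop:
      - step: *build-and-test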
Consider using buildx in runtime v3 (https://docs.docker.com/reference/cli/docker/buildx/build/) together with the caching functionality (https://support.atlassian.com/bitbucket-cloud/docs/enable-and-use-runtime-v3/).
In the future, we’ll consider a tighter integration with https://www.atlassian.com/wac/roadmap/cloud/Docker-Image-Registry?status=future;comingSoon&p=b4c45d8d-68 when this is released.
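A rough sketch of what a buildx step with caching might look like; the runtime key, image name, and cache approach are assumptions here, so check the linked runtime v3 and buildx docs for the exact configuration your plan supports:

- step:
    name: Build image with buildx
    runtime:
      cloud:
        version: "3"
    script:
      # Reuse layers from the previously pushed image as an inline cache.
      - >-
        docker buildx build
        --cache-from myregistry/myapp:latest
        --cache-to type=inline
        -t myregistry/myapp:latest
        --push .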
Our Test step previously consisted of two make targets, make test and make coverage, which were executed sequentially, whereas the SonarQube step only depended on the output/artifact generated by the make coverage target.
As it turned out, make coverage didn’t depend on the execution of make test, so we refactored this so that the two steps run in parallel:
• Test: only make test
• SonarQube: make coverage && sonar
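A minimal sketch of that layout (step names and commands mirror the description above; the sonar invocation is illustrative):

pipelines:
  default:
    - parallel:
        - step:
            name: Test
            script:
              - make test
        - step:
            name: SonarQube
            script:
              - make coverage && sonar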
Setting up self-hosted CI/CD on a personal VM is complex for several reasons:
Infrastructure Management
• Resource allocation: You need to properly size CPU, memory, and storage for your workloads
• Network configuration: Setting up proper networking, firewalls, and security groups
• Operating system maintenance: Regular updates, patches, and security hardening
• Backup and disaster recovery: Ensuring your CI/CD infrastructure and data are protected
Security Challenges
• Access control: Managing SSH keys, user permissions, and service accounts
• Secret management: Securely storing and rotating API keys, passwords, and certificates
• Network security: Configuring VPNs, SSL/TLS, and preventing unauthorized access
• Compliance: Meeting security standards and audit requirements
CI/CD Tool Complexity
• Installation and configuration: Setting up tools like Jenkins, GitLab CI, or GitHub Actions runners
• Plugin management: Installing, updating, and configuring necessary plugins
• Pipeline configuration: Writing and maintaining complex build/deploy scripts
• Integration setup: Connecting to version control, artifact repositories, and deployment targets
Scalability and Reliability
• High availability: Setting up redundancy and failover mechanisms
• Auto-scaling: Handling variable workloads and resource demands
• Monitoring and alerting: Implementing comprehensive observability
• Performance optimization: Tuning for build speed and resource efficiency
Maintenance Overhead
• Regular updates: Keeping all components current and secure
• Troubleshooting: Diagnosing and fixing issues when they arise
• Capacity planning: Monitoring usage and scaling resources appropriately
• Documentation: Maintaining runbooks and procedures
Is it possible to decouple building the components from deploying them? If so, you could put a condition on the build step so that it only builds when the component's code is changed.
The deployment step could be triggered whenever you want, and it would use the most recent artifact from the build step. You’d probably want to upload the artifact to something like AWS S3.
See this for additional information on conditions: https://support.atlassian.com/bitbucket-cloud/docs/step-options/#Condition
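A minimal sketch of a build step guarded by a changesets condition (the path, build command, and artifact pattern are placeholders):

- step:
    name: Build component A
    condition:
      changesets:
        includePaths:
          - "component-a/**"     # only run when this component's code changes
    script:
      - make build
    artifacts:
      - dist/**                  # pass the build output to later steps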
You would want to create a dynamic pipeline that injects the secret-scanning step into every pipeline executed in the workspace.
Learn about dynamic pipelines here: https://support.atlassian.com/bitbucket-cloud/docs/dynamic-pipelines/
A simple flow is to use secured variables or a third-party secrets integration plugin and inject them as environment variables. A more complex option with better security is to use our OIDC integration with third parties.
See this for a full explanation: https://support.atlassian.com/bitbucket-cloud/docs/variables-and-secrets/
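For the simple flow, a secured repository or workspace variable is exposed to the step as an environment variable (the variable name and script below are hypothetical):

- step:
    name: Deploy
    script:
      # DEPLOY_API_KEY is a secured variable configured in repository settings
      - ./deploy.sh --api-key "$DEPLOY_API_KEY"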
We support Windows builds via self-hosted runners.
This is being actively investigated.
Yes, you can configure Bitbucket Pipelines to use self-hosted runners for some steps and Atlassian runners for others. You specify a runs-on configuration for steps using self-hosted runners.
Learn more here: https://support.atlassian.com/bitbucket-cloud/docs/configure-your-runner-in-bitbucket-pipelines-yml/
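A short sketch mixing both in one pipeline (the runner labels assume a Linux self-hosted runner with the default labels):

pipelines:
  default:
    - step:
        name: Run on a self-hosted runner
        runs-on:
          - self.hosted
          - linux
        script:
          - echo "Runs on your own infrastructure"
    - step:
        name: Run on Atlassian infrastructure
        script:
          - echo "Runs on Atlassian-hosted build minutes"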
Every step runs in an isolated container that does not directly use the Docker runtime.
If the Docker service is configured on a step, we spin up a Docker daemon, which requires an additional resource allocation.
Customers can control how much memory is reserved for it via the YAML configuration.
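For example, the memory reserved for the Docker service can be adjusted in the service definition (2048 MB is just an illustrative value):

definitions:
  services:
    docker:
      memory: 2048

pipelines:
  default:
    - step:
        services:
          - docker
        script:
          - docker info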
It is supported in runtime v3: https://support.atlassian.com/bitbucket-cloud/docs/enable-and-use-runtime-v3/
Yes, it is possible to use Terraform in Bitbucket Pipelines.
A pipeline can be as simple as:
image: hashicorp/terraform:latest

pipelines:
  default:
    - step:
        oidc: true
        script:
          - terraform init
          - terraform validate
          - terraform plan
          - terraform apply -input=false -auto-approve
The oidc: true option provides an OIDC token during the build that can be used for authentication.
https://registry.terraform.io/providers/hashicorp/azuread/latest/docs/guides/service_principal_oidc
Most of the build setup time is spent on cloning.
If the repository is really big, consider using a shallower clone depth.
Disabling LFS, if it is not required for the build, can also help reduce time.
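Both options can be set in the clone section (the depth value is illustrative):

clone:
  depth: 1      # shallow clone; increase if your build needs more history
  lfs: false    # skip downloading Git LFS files when they are not needed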
We have Atlassian-managed pipes and vendor-managed pipes (e.g., Snyk), and you can also create your own pipes.
For an example of a 3rd party pipe see: https://docs.42crunch.com/latest/content/tasks/integrate_bitbucket_pipelines.htm
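Using a pipe inside a step looks like this (the pipe version and variable values shown are illustrative; check the pipe's documentation for current details):

- step:
    name: Upload to S3
    script:
      - pipe: atlassian/aws-s3-deploy:1.1.0
        variables:
          AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
          AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
          AWS_DEFAULT_REGION: us-east-1
          S3_BUCKET: my-bucket
          LOCAL_PATH: dist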
Bitbucket Pipelines and Pipes are currently a cloud-only feature.
This has been shortlisted for development, but we cannot indicate a timeline yet.
Can you expand on the use case here? You can put the manual step at the point where you want the workflow to come to a terminal stop.
What we demoed was a pre-release feature. We plan to ship this to customers later this year.
https://support.atlassian.com/bitbucket-cloud/docs/code-insights/
It’s currently in beta and free, along with several other AI tools. Learn more and sign up for early access here.
Yes, code review and approval functionality are available on all tiers as part of your plan.
We use a combination of GPT-4o and Sonnet 3.5 and are always evaluating and updating to newer models to improve performance. The workflow (the running environment) is hosted within Bitbucket Pipelines, but the models are not; we just use the providers.
Very vague and unimportant:
Has this always been the case, or has it gotten better/worse recently?
Good comments drown in this sea of unusable comments:
We are working on surfacing category labels to help people differentiate comments. This will be available in the next quarter.
Trigger a pull request pipeline manually:
This is also coming in the next quarter.
To help make comments more tailored, we are now rolling out ‘customization’ to help users instruct the Reviewer agent.
Get started
Read our guide to learn where to create customization files in your repositories, and see example instructions and standards to improve your results. https://rovodevagents-beta.atlassian.net/wiki/external/MWY0OGFmODRmNTYxNGFhMWE5OGQxMzYzYjA4NjQ3OGI
On every commit to the pull request, the same truckload of comments was made:
The Reviewer agent currently comments only on the first commit, not on subsequent commits.
For Jenkins, we have CLI tools that convert Jenkins modules to Pipelines syntax. Here is the documentation: https://support.atlassian.com/bitbucket-cloud/docs/how-to-migrate-from-jenkins-to-bitbucket-pipelines/
This blog outlines some of the benefits: https://community.atlassian.com/forums/Pipelines-articles/6-reasons-to-modernize-your-CI-CD-with-Bitbucket-Pipelines/ba-p/2724325
It depends on your current implementation. We recommend migrating a few of your pipelines to understand the effort required. We also have certified partners who can help you with your migration.
Here’s how you can set up JSM change management with Bitbucket https://www.atlassian.com/software/jira/service-management/product-guide/getting-started/change-management#how-it-works
Atlassian Analytics can be used to get DevOps data for DORA Metrics https://support.atlassian.com/analytics/docs/schema-for-devops-data/
Compass is sold as a separate product and requires its own license.
Bitbucket is a SCM and CI/CD tool. Compass is an internal developer platform that helps software teams manage their software components, get metrics, and more.
Learn more here: https://www.atlassian.com/software/compass
For more information on how Bitbucket and Compass work together, please take a look at:
CI/CD is charged based on build minutes used. There are free minutes included in your plan to help you get started. You can add more minutes to your plan as needed. https://www.atlassian.com/software/bitbucket/pricing
Warren Marusiak