
Top "Effective DevOps pipelines with Atlassian and AWS" webinar questions

Thanks to all those who joined us for our webinar "Effective DevOps pipelines with Atlassian and AWS" on March 30 and 31! If you missed it, want to watch it again, or want to share it with your team, you can watch it on-demand here.

As a recap, the webinar covered:

  • Using Bitbucket Pipelines to build, test, and deploy to AWS Lambda
  • Managing and tracking changes in Jira Software through to deployment
  • Using Opsgenie and AWS CloudWatch to monitor for problems, and alert on call in real time

We had over 150 questions during the webinar, and below you'll find answers to some of the most common ones. If there's something else you want to know, ask us below and we'll do our best to get it answered!

What is the DevOps loop?

The DevOps loop is the lifecycle that encompasses the entire software development flow between development and IT. The loop is a convenient way to show what that flow looks like. You can read more about the loop and DevOps lifecycle here:

How are secrets stored in a Bitbucket repository?

More information about variable and secret management can be found here:
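As a brief illustration of the idea (the variable names below are hypothetical): secrets themselves are not committed to the repository; they are defined as secured repository or deployment variables and referenced in the pipeline, where they are injected as environment variables and masked in log output. A minimal sketch:

```yaml
# Sketch only: AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY are assumed to be
# defined as *secured* variables under
# Repository settings > Pipelines > Repository variables.
pipelines:
  default:
    - step:
        name: Deploy
        script:
          - pipe: atlassian/aws-lambda-deploy:0.6.0
            variables:
              # Secured values never appear in the YAML or in the logs.
              AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
              AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
              AWS_DEFAULT_REGION: 'us-west-2'
              FUNCTION_NAME: 'my-function'   # hypothetical function name
              COMMAND: 'update'
```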

In addition to this, we recently announced Bitbucket Pipelines support for OpenID Connect and use that to talk to any third-party application that supports it:
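For context, opting a step into OpenID Connect is a one-line change in the pipeline configuration. A sketch, assuming the third-party side (e.g. an AWS IAM identity provider trusting Bitbucket) is configured separately:

```yaml
pipelines:
  default:
    - step:
        name: Deploy with OIDC
        oidc: true   # makes a signed identity token available to this step
        script:
          # The token is exposed via the BITBUCKET_STEP_OIDC_TOKEN
          # environment variable for exchange with the third party.
          - echo "OIDC token is available in this step"
```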

Can we use another tool other than Bitbucket or Bitbucket Pipelines?

Yes! It's definitely possible to plug and play different tools (like Jenkins, GitHub, GitLab, etc.) throughout this workflow. Please note that the integrations and features on offer will differ depending on the tool you choose, compared with what you get with Bitbucket Pipelines.

Can we run different steps in parallel?

Yes, it's possible to set up and run parallel steps in Bitbucket Pipelines. See our docs for more information:
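As a small sketch of the syntax (the step contents here are illustrative, not from the webinar): steps nested under a `parallel` block run at the same time, and the pipeline waits for all of them before moving on.

```yaml
pipelines:
  default:
    - parallel:
        - step:
            name: Lint
            script:
              - pip install flake8
              - flake8 awslambda/
        - step:
            name: Unit Test
            script:
              - pip install pytest
              - python3 -m pytest -v awslambda/tst/
```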

What if your process requires a manual step (e.g. waiting for approval) before pushing an update to the production endpoint? Can this handle situations like these?

Yes, you can use pipeline triggers to set up manual steps that cover scenarios like that.

More information can be found here, and you can implement manual steps in parallel groups too:
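As a sketch, a manual gate before the production step looks like this (step names and scripts are illustrative):

```yaml
pipelines:
  default:
    - step:
        name: Deploy to PREPROD
        script:
          - echo "deploying to preprod"
    - step:
        name: Deploy to PROD
        trigger: manual   # pipeline pauses here until someone runs the step
        script:
          - echo "deploying to prod"
```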

Can we deploy to AWS or GCP kubernetes with the Bitbucket Pipelines?

Yes, it's possible to deploy easily to services such as EKS, GKE, and even AKS using the pipes functionality in Bitbucket Pipelines. A list of available pipes can be found here.
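For example, an EKS deployment can use the `atlassian/aws-eks-kubectl-run` pipe. This is only a sketch: the pipe version, cluster name, and manifest path below are assumptions (check the pipe's listing for current values), and the credentials are assumed to be secured repository variables.

```yaml
pipelines:
  default:
    - step:
        name: Deploy to EKS
        script:
          - pipe: atlassian/aws-eks-kubectl-run:2.2.0  # version is an assumption
            variables:
              AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
              AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
              AWS_DEFAULT_REGION: 'us-west-2'
              CLUSTER_NAME: 'my-cluster'           # hypothetical
              KUBECTL_COMMAND: 'apply'
              RESOURCE_PATH: 'k8s/deployment.yml'  # hypothetical
```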


Sample code

Finally, here is the sample YAML code used during the webinar:

# NOTE: the transcript of this file lost some structure. The definitions/
# pipelines keys and the script: lines have been restored; the branches
# section heading is a best-effort reconstruction, and values that were
# elided in the original (the zip target, the empty --url argument) are
# left as-is.
image: python:3.8

definitions:
  steps:
    - step: &unit_test
        name: Unit Test
        script:
          - pip install pytest
          - python3 -m pytest -v awslambda/tst/ --junitxml=test-reports/report.xml
          - apt-get update && apt-get install -y zip
          - zip -r awslambda
    - step: &deploy_to_preprod
        name: Deploy to PREPROD
        script:
          - pipe: atlassian/aws-lambda-deploy:0.6.0
            variables:
              AWS_DEFAULT_REGION: 'us-west-2'
              FUNCTION_NAME: 'arn:aws:lambda:us-west-2:756685045356:function:test001'
              COMMAND: 'update'
          - rm -rf /opt/atlassian/pipelines/agent/build/.bitbucket/pipelines/generated/pipeline/pipes/aws-lambda-deploy-env
    - step: &integration_test_preprod
        name: Integration test PREPROD
        script:
          - pip install pytest requests
          - python3 -m pytest -v --url="" --expected_value="\"apple\"" awslambda/tst/ --junitxml=test-reports/report.xml
    - step: &deploy_to_prod
        name: Deploy to PROD
        script:
          - pipe: atlassian/aws-lambda-deploy:0.6.0
            variables:
              AWS_DEFAULT_REGION: 'us-east-1'
              FUNCTION_NAME: 'arn:aws:lambda:us-east-1:756685045356:function:test001'
              COMMAND: 'update'
    - step: &integration_test_prod
        name: Integration test PROD
        script:
          - pip install pytest requests
          - python3 -m pytest -v --url="" --expected_value="\"apple\"" awslambda/tst/ --junitxml=test-reports/report.xml

pipelines:
  default:
    - step: *unit_test
    - step:
        <<: *deploy_to_preprod
        deployment: PREPROD
    - step: *integration_test_preprod
  branches:
    master:   # branch name assumed; this nesting level was lost in the transcript
      - step: *unit_test
      - step:
          <<: *deploy_to_preprod
          deployment: PREPROD
      - step: *integration_test_preprod
      - step:
          <<: *deploy_to_prod
          deployment: PROD
      - step: *integration_test_prod

1 comment



Should we deploy master to staging before deploying it to production? Or is testing our changes in a feature branch, merging it to master, and deploying master directly to production an acceptable way? What is recommended, and why?
