Top "Effective DevOps pipelines with Atlassian and AWS" webinar questions

Thanks to all those who joined us for our webinar "Effective DevOps pipelines with Atlassian and AWS" on March 30 and 31! If you missed it, want to watch it again, or want to share it with your team, you can watch it on demand here.

As a recap, the webinar covered:

  • Using Bitbucket Pipelines to build, test, and deploy to AWS Lambda
  • Managing and tracking changes in Jira Software through to deployment
  • Using Opsgenie and AWS CloudWatch to monitor for problems, and alert on call in real time

We had over 150 questions during the webinar, and below you'll find answers to some of the most common ones. If there's something else you want to know, ask us below and we'll do our best to get it answered!

What is the DevOps loop?

The DevOps loop is the lifecycle that encompasses the entire software development flow between development and IT. The loop is a convenient way to show what that flow looks like. You can read more about the loop and DevOps lifecycle here: https://www.atlassian.com/devops

How are secrets stored in a Bitbucket repository?

More information about variable and secret management can be found here: https://support.atlassian.com/bitbucket-cloud/docs/variables-and-secrets/

In addition to this, we recently announced Bitbucket Pipelines support for OpenID Connect and use that to talk to any third-party application that supports it: https://bitbucket.org/blog/bitbucket-pipelines-and-openid-connect-no-more-secret-management
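As a minimal sketch of the OIDC approach (the IAM role ARN below is a placeholder for a role you'd create with Bitbucket's OIDC provider as a trusted identity), a step can opt in with `oidc: true` and exchange the issued token for short-lived AWS credentials instead of storing long-lived keys:

```yaml
pipelines:
  default:
    - step:
        name: Deploy without stored secrets
        oidc: true  # ask Pipelines to issue an OIDC web identity token for this step
        script:
          # Placeholder role ARN - replace with a role that trusts Bitbucket's OIDC provider
          - export AWS_ROLE_ARN=arn:aws:iam::123456789012:role/bitbucket-deploy
          # Write the token where the AWS CLI/SDK can pick it up as a web identity
          - export AWS_WEB_IDENTITY_TOKEN_FILE=$(pwd)/web-identity-token
          - echo $BITBUCKET_STEP_OIDC_TOKEN > $(pwd)/web-identity-token
          # Any AWS CLI call now runs under the assumed role, no secret keys stored
          - aws sts get-caller-identity
```

With this in place, the `AWS_ACCESS_KEY_ID`/`AWS_SECRET_ACCESS_KEY` repository variables used elsewhere in this post become unnecessary for steps that run with `oidc: true`.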

Can we use another tool other than Bitbucket or Bitbucket Pipelines?

Yes! It’s definitely possible to plug in different tools (like Jenkins, GitHub, GitLab, etc.) throughout this workflow. Please note that the integrations and features on offer will differ depending on the tool you choose, so your experience may not match what you see with Bitbucket Pipelines.

Can we run different steps in parallel?

Yes, it’s possible to set up and run parallel steps in Bitbucket Pipelines. See our docs for more information: https://support.atlassian.com/bitbucket-cloud/docs/set-up-or-run-parallel-steps/
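For illustration (the lint step here is a hypothetical addition, not part of the webinar demo), steps grouped under a `parallel` key run at the same time instead of sequentially:

```yaml
pipelines:
  default:
    - parallel:
        # Both steps start together; the pipeline waits for both to finish
        - step:
            name: Unit tests
            script:
              - pip install pytest
              - python3 -m pytest -v awslambda/tst
        - step:
            name: Lint
            script:
              - pip install flake8
              - flake8 awslambda
```

Running independent checks like tests and linting in parallel shortens overall pipeline time without changing the result.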

What if your process requires a manual step (e.g. waiting for approval) before pushing an update to the production endpoint? Can this handle situations like that?

Yes, you can use pipeline triggers to set up a manual step that covers scenarios like that.

More information can be found here, and you can also use manual steps within parallel groups: https://bitbucket.org/blog/manual-steps-in-parallel-groups-available-for-pipelines
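As a sketch (step names and scripts are illustrative), adding `trigger: manual` to a step pauses the pipeline until someone presses "Run" in the Bitbucket UI, which gives you an approval gate before production:

```yaml
pipelines:
  branches:
    mainline:
      - step:
          name: Deploy to PREPROD
          deployment: PREPROD
          script:
            - echo "deploying to preprod"
      - step:
          name: Deploy to PROD
          trigger: manual   # pipeline pauses here until run manually in the UI
          deployment: PROD
          script:
            - echo "deploying to prod"
```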

Can we deploy to AWS or GCP kubernetes with the Bitbucket Pipelines?

Yes, it’s possible to deploy to services such as EKS, GKE, and even AKS using the pipes functionality in Bitbucket Pipelines. A list of available pipes can be found here.
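As an example for EKS (cluster name, manifest path, and pipe version are placeholders; check the pipe's repository for the latest version and full variable list), the `atlassian/aws-eks-kubectl-run` pipe runs `kubectl` against your cluster much like the Lambda pipe used in the sample below:

```yaml
- step:
    name: Deploy to EKS
    script:
      - pipe: atlassian/aws-eks-kubectl-run:2.2.1   # pin to the latest released version
        variables:
          AWS_ACCESS_KEY_ID: ${AWS_ACCESS_KEY_ID}
          AWS_SECRET_ACCESS_KEY: ${AWS_SECRET_ACCESS_KEY}
          AWS_DEFAULT_REGION: 'us-west-2'
          CLUSTER_NAME: 'my-eks-cluster'        # placeholder cluster name
          KUBECTL_COMMAND: 'apply'
          RESOURCE_PATH: 'k8s/deployment.yaml'  # placeholder manifest path
```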

Sample code

Finally, here is the sample YAML code (the bitbucket-pipelines.yml file) used during the webinar:

image: python:3.8
definitions:
  steps:
    - step: &unit_test
        name: Unit Test
        script:
          - pip install pytest
          - python3 -m pytest -v awslambda/tst/aws_lambda_demo_test.py --junitxml=test-reports/report.xml
          - apt-get update && apt-get install -y zip
          - zip -r lambda.zip awslambda
        artifacts:
          - lambda.zip
    - step: &deploy_to_preprod
        name: Deploy to PREPROD
        script:
          - pipe: atlassian/aws-lambda-deploy:0.6.0
            variables:
              AWS_ACCESS_KEY_ID: ${AWS_ACCESS_KEY_ID}
              AWS_SECRET_ACCESS_KEY: ${AWS_SECRET_ACCESS_KEY}
              AWS_DEFAULT_REGION: 'us-west-2'
              FUNCTION_NAME: 'arn:aws:lambda:us-west-2:756685045356:function:test001'
              COMMAND: 'update'
              ZIP_FILE: lambda.zip
          - rm -rf /opt/atlassian/pipelines/agent/build/.bitbucket/pipelines/generated/pipeline/pipes/aws-lambda-deploy-env
    - step: &integration_test_preprod
        name: Integration test PREPROD
        script:
          - pip install pytest requests
          - python3 -m pytest -v --url="https://kez2kqr6ad.execute-api.us-west-2.amazonaws.com/default/test001" --expected_value="\"apple\"" 
            awslambda/tst/aws_lambda_demo_integration_test.py --junitxml=test-reports/report.xml
    - step: &deploy_to_prod
        name: Deploy to PROD
        script:
          - pipe: atlassian/aws-lambda-deploy:0.6.0
            variables:
              AWS_ACCESS_KEY_ID: ${AWS_ACCESS_KEY_ID}
              AWS_SECRET_ACCESS_KEY: ${AWS_SECRET_ACCESS_KEY}
              AWS_DEFAULT_REGION: 'us-east-1'
              FUNCTION_NAME: 'arn:aws:lambda:us-east-1:756685045356:function:test001'
              COMMAND: 'update'
              ZIP_FILE: lambda.zip
    - step: &integration_test_prod
        name: Integration test PROD
        script:
          - pip install pytest requests
          - python3 -m pytest -v --url="https://3wds8qe7ik.execute-api.us-east-1.amazonaws.com/default/test001" --expected_value="\"apple\"" 
            awslambda/tst/aws_lambda_demo_integration_test.py --junitxml=test-reports/report.xml
pipelines:
  default:
    - step: *unit_test
    - step:
        <<: *deploy_to_preprod
        deployment: PREPROD
    - step: *integration_test_preprod
  branches:
    mainline:
      - step: *unit_test
      - step:
          <<: *deploy_to_preprod
          deployment: PREPROD
      - step: *integration_test_preprod
      - step:
          <<: *deploy_to_prod
          deployment: PROD
      - step: *integration_test_prod

1 comment

Swarna Sharpa
April 5, 2021

Should we deploy master to staging before deploying it to production? Or is testing our changes in a feature branch, merging it to master, and deploying master directly to production an acceptable way? What is recommended? And why?
