
How to create a Glue job by running a Bitbucket pipeline

How can I create an automated pipeline in Bitbucket to push files from Bitbucket repositories to S3 buckets, and then use the same pipeline to create a Glue job from the files present in the S3 bucket?

1 answer

0 votes

Hi @srikanth

Thank you for your question!

This is a good use case for Bitbucket Pipes.


Use the aws-s3-deploy pipe to deploy files to S3. Add the pipe to your bitbucket-pipelines.yml configuration:

  - pipe: atlassian/aws-s3-deploy:1.1.0
    variables:
      AWS_DEFAULT_REGION: 'us-east-1'
      S3_BUCKET: 'my-bucket-name'
      LOCAL_PATH: 'build'
  # install the AWS CLI, then run Glue commands
  - curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip" && unzip awscliv2.zip
  - echo 'c778f4cc55877833679fdd4ae9c94c07d0ac3794d0193da3f18cb14713af615f  awscliv2.zip' | sha256sum -c - && ./aws/install
  - aws glue create-job ... your params ...
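Put together, a full bitbucket-pipelines.yml for this flow could look roughly like the sketch below. The bucket name, region, job name, role, and script path are illustrative placeholders (not values from this thread), and AWS credentials are expected as repository variables:

```yaml
pipelines:
  default:
    - step:
        name: Deploy to S3 and create a Glue job
        script:
          # Pipe: copy the local build/ directory into the S3 bucket.
          - pipe: atlassian/aws-s3-deploy:1.1.0
            variables:
              AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
              AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
              AWS_DEFAULT_REGION: 'us-east-1'
              S3_BUCKET: 'my-bucket-name'
              LOCAL_PATH: 'build'
          # Create the Glue job from the script now in S3
          # (assumes the AWS CLI was installed in an earlier step,
          # as shown in the snippet above).
          - >
            aws glue create-job
            --name my-etl-job
            --role my-glue-service-role
            --command '{"Name": "glueetl", "ScriptLocation": "s3://my-bucket-name/build/etl.py"}'
```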


Best regards,
Oleksandr Kyrdan


Thank you for answering my question. Could you please explain to me how to achieve the following?


Suppose some Glue jobs are already running, and the Bitbucket repositories contain new code related to a job. Will the pipeline be able to check the list of existing jobs to see whether that job already exists, and create a new Glue job if it does not?

In parallel, will it be able to update the code present in the HDFS location? And, as in the Glue case above, will it be able to create a new EMR job that reads the code from that HDFS location?

The same question applies to Databricks jobs as well. Can you help me figure this out?
