Issues grabbing source from Bitbucket within AWS CodePipeline


When trying to release a change from AWS CodePipeline (we have an existing Bitbucket integration), we intermittently get the following error message:

"Action execution failed

[Bitbucket] Bitbucket returned an Internal Error exception. Please retry shortly. If the issue continues it may be reported on the Bitbucket status page at https://bitbucket.status.atlassian.com/"
Usually retrying gets past this issue and the pipeline continues normally, but starting this afternoon around 1:30 PM CST, we've been consistently receiving this message, even with multiple retries and on all of our other pipelines. We're not seeing an outage reported on https://bitbucket.status.atlassian.com/
Here's the pop-up we get for "Action execution failed" when we try to deploy our changes (screenshot: Screen Shot 2022-01-12 at 2.14.34 PM.png).

12 answers

1 accepted

5 votes
Answer accepted
Mark C
Atlassian Team
Jan 12, 2022

Hi Everyone,

We received an update from our engineering team that the issue which is affecting the AWS CodePipeline integration feature is now resolved.

You can further check this incident here: https://bitbucket.status.atlassian.com/incidents/bckkpd21xrgg

We apologize for the inconvenience this has caused you and your team.
If you need more help, please do let us know.

Regards,
Mark C

It is happening again now. Haven't been able to pull from Bitbucket for some time now.


I can verify that when I change the repository settings to "Full Clone" it starts working again.

Same, happening again!


Hi @Mark C, I'm seeing the same issue on my end. I just set up a CodePipeline with a Bitbucket source. The Source stage worked fine on the first launch, right after creating the pipeline. Once I committed a new change in Bitbucket, the Source stage reran and failed. I reverted to the original source code and got the same issue. I think it's related to the webhook and will troubleshoot that next. Just want to confirm the issue is still present.

 

Edit 1:

There are a few URL redirects I've noticed when accessing the target repo from the project level in the Bitbucket web app. It adds a redundant '/src/main/' to the path. Not sure if this is new or has always existed, but it doesn't seem right.

Also, when I access the repo publicly with the HTTPS clone address provided, there's a pop-up about credentials no longer being needed (click OK), which lands me on what should be the repo address, BUT without the redundant '/src/main/'. Reloading this page presents the same error, but once I add a '/' to the end of the URL, shazam: it redirects through the redundant URL and correctly displays the repo.

Hope this helps. Hoping a fix is coming soon.

Hi, I'm experiencing the same issue just today. Please let me know if there's a way to resolve this on our end.

Just to be clear, we did not change anything in our CodePipeline configuration. The error below just suddenly appeared:

[Bitbucket] Bitbucket returned an Internal Error exception. Please retry shortly. If the issue continues it may be reported on the Bitbucket status page at https://bitbucket.status.atlassian.com/

Same issue for me

Had the same issue since Jan 12, 2022 6:00 PM (UTC±0:00)

I am getting this since yesterday; any clue? (CodePipeline source stage)

Did you ever solve this?

Yes, the issue was with the repo size; it had reached 2.0 GB.

Thanks. I didn't think there would actually be a reply, since no one seems to reply here. I was wondering if that was the issue. Ours just went over 2.0 GB.

So you just force developers to keep the repository under 2.0 GB? I agree 2.0 GB is kind of large, but it seems quite arbitrary.

Just to help out the next person to Google this: we ran into the same issue as everyone in this thread.

When a repo hits 2 GB in size, AWS CodePipeline will fail to pull the repository with basically no error information, just "[Bitbucket]". I suspect Bitbucket does not provide an error and just throws a 500, hence that message in CodePipeline (CPL).

In our case, someone committed some binaries that shouldn't have been in the repo. Unfortunately, we didn't realize this was a critical issue until many commits later, when a new feature branch was created and caused the repository size to jump (presumably by the size of the binaries in one fell swoop, about 500 MB). Since we had three active branches, those ~500 MB added up to 1.5 GB and ultimately pushed us over 2 GB, and deploys started to fail at the source step in CodePipeline.

Once we removed the files from all branches, using pointers from [1], the repository was still huge; in fact, it was larger than when we started the process. We even removed the latest feature branch, which we could easily re-create, and still the size was large. We also tried creating another branch and removing it to see if garbage collection would be forced on the remote repository, but it was not. We had to submit a support request and wait for support to run the GC, which immediately fixed the issue.

I am disappointed that this "soft" limit of 2 GB breaks CPL without a useful error on either the AWS or Bitbucket side. It should be documented more strongly that repositories cannot exceed 2 GB; calling it a soft limit seems incorrect if certain functionality breaks. Perhaps this is a problem on the AWS side, I'm not sure. Anyway, hopefully this post helps someone and saves them the four hours of troubleshooting I spent trying to figure out what the heck happened, only to find out we hit the soft limit and it broke everything.

[1] https://support.atlassian.com/bitbucket-cloud/docs/maintain-a-git-repository/
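A quick way to catch this before deploys start failing is to watch the repository size. A minimal sketch (not an official tool), assuming the Bitbucket Cloud 2.0 repository resource's `size` field (reported in bytes); the workspace and repo names are placeholders, and authentication is omitted for brevity:

```python
# Hedged sketch: warn when a Bitbucket repo nears the ~2 GB threshold
# described in this thread. Standard library only.
import json
import urllib.request

GIB = 1024 ** 3
SOFT_LIMIT_BYTES = 2 * GIB  # the limit this thread says breaks CodePipeline

def near_soft_limit(size_bytes: int, margin: float = 0.9) -> bool:
    """True once the repository passes `margin` (default 90%) of 2 GiB."""
    return size_bytes >= margin * SOFT_LIMIT_BYTES

def repo_size_bytes(workspace: str, repo_slug: str) -> int:
    """Fetch the repository resource; it includes a `size` field in bytes."""
    url = f"https://api.bitbucket.org/2.0/repositories/{workspace}/{repo_slug}"
    with urllib.request.urlopen(url, timeout=30) as resp:
        return json.load(resp)["size"]

# Example usage (placeholder names, requires network access and a public or
# authenticated repo):
#   size = repo_size_bytes("my-workspace", "my-repo")
#   if near_soft_limit(size):
#       print(f"Warning: repo is {size / GIB:.2f} GiB, approaching 2 GiB")
```

Running something like this in a scheduled job would have flagged the growth well before the source stage started failing.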

wnisar
I'm New Here
Feb 03, 2024

Thanks for the info @Nick Webb. That's exactly the case for us. We are getting the same error on the repo that is over 2 GB, but it works fine with other, smaller repos.

I have the same issue, which started a few hours ago. Full clone is not an acceptable workaround for me (we use a CodePipeline feature that does not work with full clone).

I was able to resolve this by ensuring the CodePipeline source stage does a FullClone rather than the default "CodePipeline default".

Thanks, this fixed the issue!


Was your repo over 2 GB? I tried doing this, but it wasn't really an option for our CodePipeline.

Having the same issue
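For those who can't (or don't want to) recreate the pipeline by hand, the FullClone workaround above can also be applied to an existing pipeline definition. A hedged sketch, assuming the dict shape returned by boto3's `codepipeline.get_pipeline()`; in CodePipeline's configuration, a full clone corresponds to setting `OutputArtifactFormat` to `CODEBUILD_CLONE_REF` on the `CodeStarSourceConnection` source action:

```python
# Hedged sketch: switch a pipeline's Bitbucket source action to full-clone
# output. The pipeline dict mirrors boto3's get_pipeline() response shape;
# names below are placeholders.

def enable_full_clone(pipeline: dict) -> dict:
    """Set OutputArtifactFormat=CODEBUILD_CLONE_REF on every
    CodeStarSourceConnection source action in the pipeline definition."""
    for stage in pipeline.get("stages", []):
        for action in stage.get("actions", []):
            provider = action.get("actionTypeId", {}).get("provider")
            if provider == "CodeStarSourceConnection":
                config = action.setdefault("configuration", {})
                config["OutputArtifactFormat"] = "CODEBUILD_CLONE_REF"
    return pipeline

# Usage (assumes boto3 is installed and AWS credentials are configured):
#   import boto3
#   cp = boto3.client("codepipeline")
#   definition = cp.get_pipeline(name="my-pipeline")["pipeline"]
#   cp.update_pipeline(pipeline=enable_full_clone(definition))
```

Note that, as mentioned elsewhere in this thread, a full-clone artifact is not compatible with every downstream action, so check your pipeline before flipping this.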

I don't know if this will help the Atlassian devs debug this issue, but CodePipeline uses this URL to get the source for the branch: https://bitbucket.org/org/repo/src/branch-name, and that results in "Resource not found".

If your branch naming convention is feature/branch, then Bitbucket fails to redirect to a URL like:

https://bitbucket.org/org/repo/src/3e823120cc75e4340c6b8c78e08c1e51484/?at=feature/my-branch-name

Hi, how do you know that? Thanks.

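If the slash in `feature/branch`-style names is indeed what trips the redirect, one thing to check is whether the branch name is percent-encoded when it is placed in the `?at=` query parameter. A small standard-library illustration (nothing here is Bitbucket-specific):

```python
from urllib.parse import quote

branch = "feature/my-branch-name"

# quote() leaves "/" alone by default (safe="/"); with safe="" the slash is
# encoded too, which is what a single query-parameter value would need:
encoded = quote(branch, safe="")
print(encoded)  # feature%2Fmy-branch-name
```

Whether Bitbucket's redirect handles the raw or the encoded form is the question; comparing both in a browser against a failing repo would narrow it down.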

Same issue here!!!

As of 16 January 2022, this bug still occurs. Thanks to Frank DiRocco's advice, I solved it by setting code_build_clone_output to True in my CodeStar Connection source action.

Like this:

aws_codepipeline_actions.CodeStarConnectionsSourceAction(
    action_name="BitBucketSource",
    owner="owner",
    repo="repo",
    connection_arn="arn",
    code_build_clone_output=True,  # Bitbucket fix
    output=source_code,
)

We experienced this issue today for all our branches with AWS CodePipeline.

Problem solved! Nice!
