Terraform AWS provider times out when executed via pipe: docker:// in Bitbucket Pipelines

Emre Polat
March 2, 2026

We are experiencing consistent Terraform AWS provider startup timeouts when running our Docker image via `pipe: docker://...` in Bitbucket Pipelines.

The same Docker image and entrypoint script work correctly when used directly as the step image (using `image: <custom-image>` and executing `./entrypoint.sh`). However, when executed through `pipe: docker://<custom-image>`, Terraform fails during provider initialization with the following error:

```
provider: configuring client automatic mTLS
vertex "provider[\"registry.terraform.io/hashicorp/aws\"].<alias>" error: timeout while waiting for plugin to start
```
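For reference, a minimal sketch of the two invocation styles (the image name and step names below are illustrative, not our actual configuration):

```yaml
# bitbucket-pipelines.yml sketch; image and step names are illustrative.
pipelines:
  default:
    # Variant A: custom image used directly as the step image -- works.
    - step:
        name: Terraform plan (step image)
        image: our-registry/terraform-runner:latest
        oidc: true
        script:
          - ./entrypoint.sh
    # Variant B: the same image executed as a pipe -- fails with the
    # provider startup timeout shown above.
    - step:
        name: Terraform plan (pipe)
        oidc: true
        script:
          - pipe: docker://our-registry/terraform-runner:latest
```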

Environment details:

- Bitbucket Pipelines (cloud)
- Execution via `pipe: docker://<custom-image>`
- Terraform 1.7.5
- AWS provider 5.x
- OIDC authentication enabled (`oidc: true`)
- Multiple AWS provider aliases
- Running: terraform plan -refresh-only -detailed-exitcode
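For context, the multi-alias provider setup is of roughly this shape (regions, alias name, and version constraint below are illustrative):

```hcl
# Illustrative provider configuration; regions and alias are assumptions.
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

# Default provider instance.
provider "aws" {
  region = "eu-west-1"
}

# Additional aliased instance; each alias starts its own provider
# plugin subprocess during initialization.
provider "aws" {
  alias  = "us"
  region = "us-east-1"
}
```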

Important observation:

When the exact same container and script are executed as the step image (not using `pipe:`), Terraform runs successfully without any provider timeouts.

This suggests the issue may be related to the Docker-in-Docker execution model used by Bitbucket Pipes, potentially affecting Terraform provider subprocess startup (gRPC + mTLS handshake) inside the nested container runtime.
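To narrow down where the handshake stalls, Terraform's trace logging can be enabled through the pipe's environment (`TF_LOG` and `TF_LOG_PATH` are standard Terraform environment variables; the image name below is illustrative):

```yaml
# Illustrative pipe step with Terraform trace logging enabled.
- step:
    name: Terraform plan via pipe (trace logs)
    oidc: true
    script:
      - pipe: docker://our-registry/terraform-runner:latest
        variables:
          TF_LOG: "TRACE"           # logs provider plugin startup and handshake
          TF_LOG_PATH: "tf-trace.log"
```

The trace log should show how far each provider subprocess gets before the timeout, which may help distinguish a slow container start from a blocked gRPC/mTLS handshake.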

We would appreciate clarification on whether this is a known limitation of the Bitbucket Pipe runtime environment, and on any recommended best practices for running Terraform workloads with multiple provider aliases via `pipe: docker://`.

Deployment type: Cloud
Product plan: Premium