Help me understand Bitbucket pipeline services

I was trying to get PHP composer to work as a pipeline service, using the official composer image. However, when I run the pipeline I get a `composer: not found`, which leads me to believe that I don't understand how pipeline services work. I had expected to be able to run composer commands on the repository code, much like I can run docker commands with the docker service.

  • Why is that not the case?
  • How would the composer image have to be built to work as a service? Is it a question of the default command? Or is the BITBUCKET_CLONE_DIR not available to the service?

In this concrete case I could simply install composer directly into the pipeline image, but I would like to better understand how services work for future reference.

Here is the relevant pipeline snippet:

definitions:
  services:
    composer:
      image: composer:2.5.1

  steps:
    - step: &test
        # Expects that the registry values be set as repository/workspace variables
        name: Tests & linting
        image:
          name: <our-registry-url>/bitbucket/php-xdebug-xsl:latest
          username: $REGISTRY_USER
          password: $REGISTRY_PWD
          email: $REGISTRY_EMAIL
        runs-on:
          - self-hosted
        script:
          - composer install
          - XDEBUG_MODE=coverage php bin/phpunit
          - php ./tests/check-coverage.php
          - ./vendor/bin/phpcs -v --standard=PSR12 --ignore='Kernel.php, bootstrap.php' ./src ./tests
          - ./vendor/bin/php-cs-fixer fix -n -vvv
        caches:
          - composer
        services:
          - compose

Thanks
Chris

1 answer

Answer accepted

Hi Christof,

I see that you are using the label self-hosted in your yml file. Can you please confirm if you are using Linux Docker Runners?

Please note that the following applies to Pipelines that run in our own infrastructure and pipelines that run in Linux Docker runners:

Pipelines builds run in Docker containers. For every step of your bitbucket-pipelines.yml file, a Docker container starts (the build container) based on the image you have defined in the yml file, the repo is cloned in that container and then the commands from the step's script are executed. When the step finishes, the container gets destroyed.

If you use a service for a step in your bitbucket-pipelines.yml file, Bitbucket starts another Docker container for the service that shares a network adapter with your build container. Service containers are typically used when you need something like a database service, and they'll be available on localhost using their default port.
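To make the database example concrete, a step using a service container might look like the following sketch. The PostgreSQL image, database name, and credentials here are illustrative assumptions, not something from this thread:

```yaml
# Hypothetical example: a PostgreSQL service container.
definitions:
  services:
    postgres:
      image: postgres:15
      variables:
        POSTGRES_DB: app_test
        POSTGRES_PASSWORD: example

pipelines:
  default:
    - step:
        name: Run tests against the database
        script:
          # The service is reachable on localhost at its default port (5432).
          - psql -h localhost -p 5432 -U postgres -d app_test -c 'SELECT 1'
        services:
          - postgres
```

Note that the build container only talks to the service over the shared network; the `psql` client itself still has to be installed in the build image.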

You can read more about services here. However, I don't believe that you need a service for composer. If composer is not found, it means that it is not installed in your build container.

If you're using a custom Docker image that you own as a build container (I see you have defined <our-registry-url>/bitbucket/php-xdebug-xsl:latest), you can install composer on the Docker image so you can use it during your builds.

Alternatively, you can install composer on the build container during the build, by including the necessary commands in your bitbucket-pipelines.yml file's step. This means that the installation will take place every time the build runs (as you'll have a new container).
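As a sketch of that second option, the installer commands documented at getcomposer.org can be added at the top of the step's script. The base image below is an illustrative assumption:

```yaml
# Sketch: installing composer at build time, inside the step's script.
- step:
    name: Tests & linting
    image: php:8.2-cli   # assumed base image for illustration
    script:
      # Download the composer installer and place the binary on the PATH.
      - curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
      - composer install
```

The trade-off mentioned above applies: these commands run on every build, whereas baking composer into the image is a one-time cost.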


In case you are using a self-hosted runner for Linux Shell, Windows, or macOS, these runners don't use Docker containers and the builds run directly on the host machine. In these cases, composer needs to be installed on the machine with the runner. These types of runners don't support service containers.

Kind regards,
Theodora

Hi Theodora,

Thanks for taking the time to answer me!

Can you please confirm if you are using Linux Docker Runners?

We are indeed using the Linux Docker runner.

If you're using a custom Docker image that you own as a build container (I see you have defined <our-registry-url>/bitbucket/php-xdebug-xsl:latest), you can install composer on the Docker image so you can use it during your builds.

Yes, I am aware of that option and that is what I ended up doing. I was just wondering whether it is possible to use an existing external container as a service instead of installing it in the base image used to run the step.

The reason why I assumed this was possible with services is the 'docker' service that is available for Bitbucket pipelines. In that case we can use docker commands on the files in our repo, even though docker runs in its separate service container.

That led me to assume that other services could be set up in a similar way - in this case composer, and in another case I was hoping to run Selenium tests with Selenium (and potentially Chrome) as a service rather than installing Selenium/Chrome in the base image. Chrome alone blows up the size of the base image by 700+ MB.

What I am taking away from your answer, and other posts I have come across, is that this is not easily possible. I assume that the behavior of the docker service is an exception due to the particularity of the docker-in-docker image.

I guess an alternative would be to put a docker-compose setup into the repo and run it in the pipeline step, mounting the repo as a volume into all the necessary containers. However, this has a few disadvantages, most notably that all containers used in the docker-compose setup would then need to be pulled each time the step runs, while installing applications in the base image or using them as a service allows us to re-use downloaded images on the runner host.
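For reference, the docker-compose approach described above might look roughly like this. All image names, paths, and the test command are illustrative assumptions:

```yaml
# Hypothetical docker-compose.yml: mount the repo into each container that needs it.
services:
  selenium:
    image: selenium/standalone-chrome:latest
    ports:
      - "4444:4444"   # Selenium's default WebDriver port
  tests:
    image: php:8.2-cli           # assumed test-runner image
    volumes:
      - .:/app                   # mount the cloned repo
    working_dir: /app
    command: vendor/bin/phpunit  # assumed test command
    depends_on:
      - selenium
```

The pipeline step would then run something like `docker-compose up --exit-code-from tests`, at the cost of pulling every listed image on each run, as noted above.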

Best regards,
Chris

Patrik S Atlassian Team Jan 24, 2023

Hello @Christof Koegler _Gaims GmbH_ ,

As you mentioned, the docker service indeed runs in a separate container, like any other service, but it has some peculiarities. 

When starting a step that uses a docker service, pipelines will automatically mount the docker cli executable inside the build container. This allows you to use the docker commands even though the image you are using doesn't have docker.

At the same time, pipelines spin up the docker service container, where the docker daemon (server) runs. The docker CLI from the build container connects to the docker daemon of the service container using a TCP socket. So when you execute a docker command in your build, it passes the command to the docker service through the network, and the service is the container that will actually run the command.
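The split described above can be sketched from the build container's point of view. The exact address is an assumption here (Pipelines configures the environment for you; you don't set this yourself):

```shell
# Sketch of the CLI/daemon split (address is illustrative, not from this thread).
# The build container has only the mounted docker CLI; DOCKER_HOST points it
# at the daemon running inside the docker service container over TCP.
export DOCKER_HOST=tcp://localhost:2375

docker version          # the CLI runs here, but the daemon answering is in the service container
docker build -t myapp . # the build itself executes inside the service container
```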

Extending that to your use case, in order to use composer as a service, composer would have to provide the same CLI/server mechanism. You would need a composer executable in the build container that connects over the network adapter to the service container on a specific port, while the service container runs a composer "server" listening for requests on that same port.

If composer or any other application that you want to run does not offer that CLI/Server mechanism, you would indeed need to have it as part of the image or install it during the build.

Thank you, @Christof Koegler _Gaims GmbH_!

Patrik S

Hi Patrik,

Thanks for taking the time to answer me. That makes things a lot clearer to me.

Best regards,

Chris
