We have multiple jobs in our Bamboo instance that show "no available agent" when run on a branch, but when run from master, agents can run the job.
- There are no additional requirements on the branch job that are not shared by master
- The job shows as being able to be run by three of our agents in the agent matrix
- The agents are not dedicated
- I have seen this happen for multiple builds and branches inside the same plan
- This does not occur for all of our plans
Any help would be appreciated!
Hello @Stefan,
Welcome to Atlassian Community!
This can happen when Enhanced Plan Branches (Specs branches) are being used.
Specs branches allow each branch to have a configuration distinct from the one used by the plan's default branch.
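For example (a purely hypothetical excerpt, not taken from your plan), a branch whose bamboo.yaml adds a capability that none of your remote agents advertise would show "no available agent" on that branch only:

Some Job:
  requirements:
  - system.docker.executable
  - BranchOnlyCapability  # hypothetical capability that no remote agent provides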
If you are not using Specs branches, can you send a screenshot of what you see in Bamboo when running the Plan branch?
Kind regards,
Eduardo Alvarenga
Atlassian Support APAC
--please don't forget to Accept the answer if the reply is helpful--
We are using Specs branches, so this is possible, but even looking at the Specs configuration, there are no additional requirements that would prevent our remote agents from building. I don't see a way to define requirements for a branch separately. Is this possible, or is there a better way to solve this?
Hi @Stefan,
Would it be possible to share your specs code? You can mask any secrets before sending them.
Kind regards,
Eduardo Alvarenga
Atlassian Support APAC
--please don't forget to Accept the answer if the reply is helpful--
Here is the bamboo.yaml file
---
version: 2
plan:
  project-key: Soft
  key: buildBUILD
  name: build Build (Spec)
stages:
- Build Job1:
    manual: false
    final: false
    jobs:
    - Job1 Flight Hw1
    - JobSoft
    - Job1 Virtual
    - Job1 Flight
    - Job1 Dev
- Build Hw3 Artifacts:
    manual: false
    final: false
    jobs:
    - Job1 Hw3 Artifact Deliverable
- Build VxWorks:
    manual: false
    final: false
    jobs:
    - VxWorks Hw2 Mono Image
    - VxWorks Hw3 Mono Image
    - VxWorks Hw4 Mono Image
Job1 Flight Hardware1:
  key: JOB4
  other:
    clean-working-dir: true
  tasks:
  - checkout:
      force-clean-build: false
  - script:
      interpreter: SHELL
      scripts:
      - |
        #!/bin/bash
        set -ex
        if ! docker image inspect $(grep -oP 'image: \K[^\r\n]+' Docker/Job1_flight_od_no_sb/docker-compose.yaml) > /dev/null; then
          # If the flight docker image needs to be built we have to copy the vxworks
          # install into a path under the docker context
          rm -rf Docker/Job1_flight/support/vxinstall
          cp -r /home/dnvrsv/vxworks Docker/Job1_flight/support/vxinstall
          cd Docker/Job1_flight && docker-compose build
          # Cleanup
          rm -rf Docker/Job1_flight/support/vxinstall
          cd ../../
        fi
        ./clean.sh Job1_flight_od_no_sb
        ./build.sh Job1_flight_od_no_sb
        # Copy the ReleaseFlight_od_Nosb.tar.gz to a file named ReleaseFlightHw1.tar.gz to make the artifacts consistent for the Hw1
        cp Execute/ReleaseFlight_od_Nosb.tar.gz Execute/ReleaseFlightHw1.tar.gz
        # Package up libraries required for the Monolithic image build. Copy out of image
        cd Docker/Job1_flight_od_no_sb
        docker-compose run --rm build-Job1_flight_od_no_sb /bin/bash -c "cd /work && tar cvfz Job1_lib.tar.gz Job1/lib/Hw3_vxWorks.bs/ bsp/CommonSrc/hwConfigMBR_cs/lib/ && cp Job1_lib.tar.gz /src"
  artifacts:
  - name: Job1 Flight od No sb Release Tarball
    location: Execute
    pattern: ReleaseFlight_od_Nosb.tar.gz
    shared: true
    required: true
  - name: Job1 Flight od No sb Build Libraries
    pattern: Job1_lib.tar.gz
    shared: true
    required: true
  - name: Job1 Flight Hw1 Release Tarball
    location: Execute
    pattern: ReleaseFlightHw1.tar.gz
    shared: true
    required: true
  - name: Job1 Flight Hw1 Build Libraries
    pattern: Job1_lib.tar.gz
    shared: true
    required: true
  requirements:
  - system.docker.executable
  - VxWorksCompiler
  artifact-subscriptions: []
JobSoft:
  key: JOB5
  other:
    clean-working-dir: true
  tasks:
  - checkout:
      force-clean-build: false
  - script:
      interpreter: SHELL
      scripts:
      - |
        #!/bin/bash
        # Build the Sat-Sim Image
        ./build.sh Job_soft
        if [ "${bamboo.planRepository.branchName}" == "master" ];
        then
        docker tag (IP address):9500/repo/Job_soft:local (IP address):9500/repo/Job_soft:latest
        docker push (IP address):9500/repo/Job_soft:latest
        fi
        # Build the TCP Wire Bridge Executable
        cd sat_Hw3_env/Src/App
        # Ensure the environment matches the Pipfile.lock
        pipenv install --deploy
        # Run the command to create a one file executable of the TCP Wire Bridge
        pipenv run pyinstaller --onefile --distpath ./dist Communication/Protocols/TCP_Wire_Bridge.py
        # Rename the file to Wire_Bridge to match Hw3 naming convention
        mv dist/TCP_Wire_Bridge dist/Wire_Bridge
  artifacts:
  - name: JobSoft Release Tarball
    location: Execute
    pattern: docker-JobSoft.tar.gz
    shared: true
    required: true
  - name: TCP Bridge Executable
    location: env/Src/App/dist
    pattern: Wire_Bridge
    shared: true
    required: true
  requirements:
  - system.docker.executable
  - pipenv
  artifact-subscriptions: []
Job1 Virtual:
  key: JOB2
  other:
    clean-working-dir: true
  tasks:
  - checkout:
      force-clean-build: false
  - script:
      interpreter: SHELL
      scripts:
      - |
        #!/bin/bash
        ./clean.sh Job1_virtual_
        ./build.sh Job1_virtual_
        if [ "${bamboo.planRepository.branchName}" == "master" ];
        then
        docker tag (IP address):9500/repo/Job1_virtual_:local (IP address):9500/repo/Job1_virtual_:latest
        docker push (IP address):9500/repo/Job1_virtual_:latest
        docker tag virtual_:local virtual_:latest
        docker push virtual_:latest
        fi
        if [ ! -z "${bamboo_jira_version}" ];
        then
        rm Deliveries/docker-virtual__local.tar.gz
        docker tag (IP address):9500/repo/Job1_virtual_:local (IP address):9500/repo/Job1_virtual_:${bamboo_jira_version}
        docker push (IP address):9500/repo/Job1_virtual_:${bamboo_jira_version}
        docker tag virtual_:local virtual_:${bamboo_jira_version}
        docker push virtual_:${bamboo_jira_version}
        docker image save -o Deliveries/docker-virtual__${bamboo_jira_version}.tar.gz virtual_:${bamboo_jira_version}
        fi
  artifacts:
  - name: Job1 Virtual Release Tarball
    location: Execute
    pattern: ReleaseVirtual.tar.gz
    shared: true
    required: true
  - name: Job1 Virtual Docker image (For comp)
    location: Execute
    pattern: docker-Job1_virtual__local.tar.gz
    shared: true
    required: true
  - name: Job1 Virtual Docker image
    location: Deliveries
    pattern: docker-virtual__*.tar.gz
    shared: true
    required: true
  - name: Job1 Virtual Command Telemetry Database
    location: Execute/Target/Command_Telemetry_CSVs/Virtual/cmd_tlm_db
    pattern: Fsw*.csv
    shared: true
    required: true
  requirements:
  - system.docker.executable
  artifact-subscriptions: []
Job1 Flight:
  key: JOB3
  other:
    clean-working-dir: true
  tasks:
  - checkout:
      force-clean-build: false
  - script:
      interpreter: SHELL
      scripts:
      - |
        #!/bin/bash
        set -ex
        if ! docker image inspect $(grep -oP 'image: \K[^\r\n]+' Docker/Job1_flight/docker-compose.yaml) > /dev/null; then
          # If the flight docker image needs to be built we have to copy the vxworks
          # install into a path under the docker context
          rm -rf Docker/Job1_flight/support/vxinstall
          cp -r /home/dnvrsv/vxworks Docker/Job1_flight/support/vxinstall
          cd Docker/Job1_flight && docker-compose build
          # Cleanup
          rm -rf Docker/Job1_flight/support/vxinstall
          cd ../../
        fi

        ./clean.sh Job1_flight
        ./build.sh Job1_flight

        # Package up libraries required for the Monolithic image build. Copy out of image
        cd Docker/Job1_flight
        docker-compose run --rm build-Job1_flight /bin/bash -c "cd /work && tar cvfz Job1_lib.tar.gz Job1/lib/comp_440_vxWorks.bs/ bsp/CommonSrc/hwConfigMBR_cs/lib/ && cp Job1_lib.tar.gz /src"

        if [ "${bamboo.planRepository.branchName}" == "master" ];
        then
        docker-compose push
        fi
  artifacts:
  - name: Job1 Flight Release Tarball
    location: Execute
    pattern: ReleaseFlight.tar.gz
    shared: true
    required: true
  - name: Job1 Flight Build Libraries
    pattern: Job1_lib.tar.gz
    shared: true
    required: true
  requirements:
  - system.docker.executable
  - VxWorksCompiler
  artifact-subscriptions: []
Job1 Dev:
  key: JOB1
  other:
    clean-working-dir: true
  tasks:
  - checkout:
      force-clean-build: false
  - script:
      interpreter: SHELL
      scripts:
      - |
        #!/bin/bash
        ./clean.sh Job1_dev
        ./build.sh Job1_dev
        if [ "${bamboo.planRepository.branchName}" == "master" ];
        then
        docker tag (IP address):9500/repo/Job1_dev:local (IP address):9500/repo/Job1_dev:latest
        docker push (IP address):9500/repo/Job1_dev:latest
        docker tag Job1_linux_:local Job1_linux_:latest
        docker push Job1_linux_:latest
        cd Docker/Job1_dev
        docker-compose push
        cd ../../
        fi
        if [ ! -z "${bamboo_jira_version}" ];
        then
        # Tag and push the release version of the Job1 Dev Docker Image
        docker tag (IP address):9500/repo/Job1_dev:local (IP address):9500/repo/Job1_dev:${bamboo_jira_version}
        docker push (IP address):9500/repo/Job1_dev:${bamboo_jira_version}
        # Tag and push the release version of the Job1 Linux Docker Image
        docker tag Job1_linux_:local Job1_linux_:${bamboo_jira_version}
        docker push Job1_linux_:${bamboo_jira_version}
        # Save the Docker Image for the Job1 Linux
        docker image save -o Deliveries/docker-Job1_linux__release_${bamboo_jira_version}.tar.gz Job1_linux_:${bamboo_jira_version}
        # Delete the `_local.tar.gz` file since we don't need it for Release
        rm Deliveries/docker-Job1_linux__local.tar.gz
        fi
  artifacts:
  - name: Job1 Dev Release Tarball
    location: Execute
    pattern: ReleaseDevelopment.tar.gz
    shared: true
    required: true
  - name: Job1 Dev Docker image
    location: Execute
    pattern: docker-Job1_dev_local.tar.gz
    shared: true
    required: true
  - name: Job1 Linux Docker image
    location: Deliveries
    pattern: docker-Job1_linux__*.tar.gz
    shared: true
    required: true
  - name: Job1 Linux Command Telemetry Database
    location: Execute/Target/Command_Telemetry_CSVs/Development/cmd_tlm_db
    pattern: Fsw*.csv
    shared: true
    required: true
  requirements:
  - system.docker.executable
  artifact-subscriptions: []
Job1 Hw3 Artifact Deliverable:
  key: JOB6
  other:
    clean-working-dir: true
  tasks:
  - checkout:
      force-clean-build: false
  - artifact-download:
      artifacts:
      - destination: Deliveries/virtual__for_Hw1/cmd_tlm_db
        name: Job1 Virtual Command Telemetry Database
      - destination: Deliveries/images
        name: Job1 Virtual Docker image
      - destination: Deliveries/wire_bridge
        name: TCP Wire Bridge Executable
      - destination: Deliveries/Job1_linux_/cmd_tlm_db
        name: Job1 Linux Command Telemetry Database
      - destination: Deliveries/images
        name: Job1 Linux Docker image
    conditions:
    - variable:
        equals:
          planKey: Soft-buildBUILD
  - artifact-download:
      artifacts:
      - destination: Deliveries/virtual__for_Hw1/cmd_tlm_db
        name: Job1 Virtual Command Telemetry Database
      - destination: Deliveries/images
        name: Job1 Virtual Docker image
      - destination: Deliveries/wire_bridge
        name: TCP Wire Bridge Executable
      - destination: Deliveries/Job1_linux_/cmd_tlm_db
        name: Job1 Linux Command Telemetry Database
      - destination: Deliveries/images
        name: Job1 Linux Docker image
      source-plan: Soft-buildRELEASE
    conditions:
    - variable:
        matches:
          planKey: (Soft-buildRELEASE)\w+
  - script:
      interpreter: SHELL
      scripts:
      - |
        #!/bin/bash
        # Create a tarball with the artifacts needed for delivery to the customer
        if [ ! -z "${bamboo_jira_version}" ];
        then
        # If this is for a release, we will move all the files to a directory with the name of
        # the release
        mv Deliveries ${bamboo_jira_version}
        tar -czvf Job1_linux__${bamboo_jira_version}.tar.gz ${bamboo_jira_version}/
        else
        tar -czvf Job1_linux__local.tar.gz Deliveries/
        fi
  artifacts:
  - name: Job1 Linux Tarball
    location: .
    pattern: Job1_linux_*.tar.gz
    shared: true
    required: true
  requirements:
  - system.docker.executable
  artifact-subscriptions:
  - artifact: Job1 Dev Release Tarball
  - artifact: Job1 Dev Docker image
  - artifact: Job1 Linux Docker image
  - artifact: Job1 Linux Command Telemetry Database
  - artifact: Job1 Virtual Release Tarball
  - artifact: Job1 Virtual Docker image (For comp)
  - artifact: Job1 Virtual Docker image
  - artifact: Job1 Virtual Command Telemetry Database
  - artifact: Job1 Flight Release Tarball
  - artifact: Job1 Flight Build Libraries
  - artifact: Job1 Flight od No sb Release Tarball
  - artifact: Job1 Flight od No sb Build Libraries
  - artifact: Job1 Flight Hw1 Release Tarball
  - artifact: Job1 Flight Hw1 Build Libraries
  - artifact: JobSoft Release Tarball
  - artifact: TCP Wire Bridge Executable
VxWorks Hw2 Mono Image:
  key: JOB8
  other:
    clean-working-dir: true
  tasks:
  - checkout:
      force-clean-build: false
  - script:
      interpreter: SHELL
      scripts:
      - |
        #!/bin/bash
        set -ex

        rm -rf build_images

        ./clean.sh vxworks_Hw2_mono

        # Extract libraries from the Job1 build step
        cd Docker/vxworks_Hw2_mono
        docker-compose run --rm build-vxworks_Hw2_mono /bin/bash -c "cd /work && tar xvf /src/Job1_lib.tar.gz"

        # Extract Job1 tarball for ROMFS generation
        docker-compose run --rm build-vxworks_Hw2_mono /bin/bash -c \
        "mkdir -p /work/Execute/ReleaseFlight && cd /work/Execute/ReleaseFlight && tar xvf /src/ReleaseFlight.tar.gz"

        # Creating this file causes the mono build to skip a Job1 build. We need to use the Job1 build artifacts
        # from the Job1 flight job.
        docker-compose run --rm build-vxworks_Hw2_mono /bin/bash -c "touch /work/Job1_PS/Workbench/Kernels/cs_Hw2_gSI/.skip_Job1_build"
        cd ../../

        # Build the Monolithic vxworks images
        ./build.sh vxworks_Hw2_mono_edu
        ./build.sh vxworks_Hw2_mono_fu1
        ./build.sh vxworks_Hw2_mono_fu2
  artifacts:
  - name: VxWorks Hw2 Mono Images
    location: build_images
    pattern: '*'
    shared: true
    required: true
  requirements:
  - system.docker.executable
  - VxWorksCompiler
  artifact-subscriptions:
  - artifact: Job1 Flight Release Tarball
  - artifact: Job1 Flight Build Libraries
VxWorks Hw3 Mono Image:
  key: JOB9
  other:
    clean-working-dir: true
  tasks:
  - checkout:
      force-clean-build: false
  - script:
      interpreter: SHELL
      scripts:
      - |
        #!/bin/bash
        set -ex

        rm -rf build_images

        ./clean.sh vxworks_Hw3_mono

        # Extract libraries from the Job1 build step
        cd Docker/vxworks_Hw3_mono
        docker-compose run --rm build-vxworks_Hw3_mono /bin/bash -c "cd /work && tar xvf /src/Job1_lib.tar.gz"

        # Extract Job1 tarball for ROMFS generation
        docker-compose run --rm build-vxworks_Hw3_mono /bin/bash -c \
        "mkdir -p /work/Execute/ReleaseFlight_od_Nosb && cd /work/Execute/ReleaseFlight_od_Nosb && tar xvf /src/ReleaseFlight_od_Nosb.tar.gz"

        # Creating this file causes the mono build to skip a Job1 build. We need to use the Job1 build artifacts
        # from the Job1 flight job.
        docker-compose run --rm build-vxworks_Hw3_mono /bin/bash -c "touch /work/Job1_PS/Workbench/Kernels/cs_Hw32_gSI/.skip_Job1_build"
        cd ../../

        # Build the Monolithic vxworks image
        ./build.sh vxworks_Hw3_mono
  artifacts:
  - name: VxWorks Hw3 Mono Image
    location: build_images
    pattern: '*'
    shared: true
    required: true
  requirements:
  - system.docker.executable
  - VxWorksCompiler
  artifact-subscriptions:
  - artifact: Job1 Flight od No sb Release Tarball
  - artifact: Job1 Flight od No sb Build Libraries
VxWorks Hw4 Mono Image:
  key: JOB7
  other:
    clean-working-dir: true
  tasks:
  - checkout:
      force-clean-build: false
  - script:
      interpreter: SHELL
      scripts:
      - |
        #!/bin/bash
        set -ex

        rm -rf build_images

        ./clean.sh vxworks_Hw4_mono

        # Extract libraries from the Job1 build step
        cd Docker/vxworks_Hw4_mono
        docker-compose run --rm build-vxworks_Hw4_mono /bin/bash -c "cd /work && tar xvf /src/Job1_lib.tar.gz"

        # Extract Job1 tarball for ROMFS generation
        docker-compose run --rm build-vxworks_Hw4_mono /bin/bash -c \
        "mkdir -p /work/Execute/ReleaseFlight && cd /work/Execute/ReleaseFlight && tar xvf /src/ReleaseFlight.tar.gz"

        # Creating this file causes the mono build to skip a Job1 build. We need to use the Job1 build artifacts
        # from the Job1 flight job.
        docker-compose run --rm build-vxworks_Hw4_mono /bin/bash -c "touch /work/Job1_PS/Workbench/Kernels/cs_Hw2_gSI/.skip_Job1_build"
        cd ../../

        # Build the Monolithic vxworks image
        ./build.sh vxworks_Hw4_mono
  artifacts:
  - name: VxWorks Hw4 Mono Image
    location: build_images
    pattern: '*'
    shared: true
    required: true
  requirements:
  - system.docker.executable
  - VxWorksCompiler
  artifact-subscriptions:
  - artifact: Job1 Flight Release Tarball
  - artifact: Job1 Flight Build Libraries
repositories:
- build:
    scope: project
triggers:
- polling:
    period: '150'
branches:
  create:
    for-pull-request:
      accept-fork: false
  delete:
    after-deleted-days: 1
    after-inactive-days: 30
  link-to-jira: false
notifications:
- events:
  - plan-failed
  recipients:
  - responsible
  - watchers
labels: []
dependencies:
  require-all-stages-passing: false
  enabled-for-branches: true
  block-strategy: none
  plans: []
other:
  concurrent-build-plugin: system-default
---
version: 2
plan:
  key: soft-buildBUILD
plan-permissions:
- groups:
  - Group1
  - Group2
  permissions:
  - view
  - build
- groups:
  - Group3
  permissions:
  - view
...
Hi @Stefan
Thanks for that! I see you have some requirement groups:
requirements:
- system.docker.executable
- VxWorksCompiler
requirements:
- system.docker.executable
requirements:
- system.docker.executable
- pipenv
Have you compared the bamboo.yaml file from your default Branch in Bamboo with the other branches' configuration? Are they equivalent?
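If the branch configuration comes from the repository, one quick way to check (assuming bamboo.yaml sits at the repository root as in your post; adjust the path if yours lives elsewhere) is to diff the file between master and an affected branch:

git fetch origin
git diff origin/master origin/<affected-branch> -- bamboo.yaml

Any difference in the requirements blocks between the two would explain why the branch jobs cannot find a capable agent while master builds fine. (<affected-branch> is a placeholder for one of the branches showing the problem.)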
It could also be the case that an agent with a unique capability your jobs are requesting is currently busy, dedicated to another plan/project/job, or offline.
Cheers,
Eduardo Alvarenga
Atlassian Support APAC
--please don't forget to Accept the answer if the reply is helpful--
I apologize for taking so long to respond to this. All requirements match for both master and branch builds.
The only agent we use with unique requirements is dedicated, and we are not currently having issues with that one.
I created a non-Specs clone of the build and we are not seeing this issue with that one. We are open to going this route, but were wondering if there is a way to configuration-manage a non-Specs build, or take a snapshot of the setup when a branch is run?
Do you know if there are any options for doing this without using a spec build?