
How to access a node server started in the first step of a pipeline from the second step?

What I want to do:

I want to start a backend Node.js server (on port 5000) in step 1 of my pipeline, and then call one of its public endpoints from the second step of the pipeline.


The scenario:

I have two repos, one for the frontend and the other for the backend. The yml file is in the frontend repo, and I have configured Bitbucket so that my frontend pipeline can clone the backend repo, so all good on that front. The server also starts fine in the first step:

(Screenshot: bb-community-3.png)

My YML:

(Screenshot: bb-community-2.png)


The issue

In the second step, when I send a curl GET request to the public endpoint, e.g. http://localhost:5000/search, I get the error:

curl: (7) Failed to connect to localhost port 5000: Connection refused

(Screenshot: bb-community-1.png)

I think this is because by the time we get to step 2, the build environment from step 1 has already been torn down, so the server is no longer available?

How can I access the endpoint in the second step so that the curl request succeeds?
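For reference, a minimal sketch of the kind of pipeline described above (the repo URL, step names, and scripts are illustrative, not the actual yml from the screenshot):

```yaml
pipelines:
  default:
    - step:
        name: Start backend            # step 1: the server starts here...
        script:
          - git clone git@bitbucket.org:myteam/backend.git  # hypothetical repo
          - cd backend && npm install
          - npm start &                # server listens on port 5000
    - step:
        name: Call backend             # step 2: ...but this runs in a fresh container
        script:
          - curl http://localhost:5000/search   # fails: Connection refused
```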


Thanks for any help, much appreciated.

2 answers

1 accepted

1 vote
Answer accepted

Hi @Shreejan Regmi and welcome to the community!

Pipelines builds run in Docker containers. For each step of your build, a Docker container starts, the repo is cloned into that container, and then the commands from the step's script are executed. When the commands finish successfully, or if a command fails, the Docker container is destroyed.

What you are asking is not possible because the Docker container from the first step no longer exists when the second step runs. You will need to start the server in the same step where you want to use it.
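A minimal sketch of the same-step approach, assuming an npm-based backend in a separate repo (the repo URL, image tag, and wait strategy are illustrative):

```yaml
pipelines:
  default:
    - step:
        name: Start backend and call its endpoint
        image: node:18                 # whichever node image your project needs
        script:
          - git clone git@bitbucket.org:myteam/backend.git   # hypothetical repo
          - (cd backend && npm install && npm start &)       # server in the background
          - sleep 5                                          # crude wait for startup
          - curl --fail http://localhost:5000/search         # same container, so this works
```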

Please also keep in mind that if you clone a different repo in the first step, that clone will not be available in the second step. You can make use of artifacts for any files that are generated or downloaded during a step if you want them to become available in subsequent steps.
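An illustrative artifacts declaration (the step names and paths are hypothetical); note that artifacts carry files forward, not running processes:

```yaml
pipelines:
  default:
    - step:
        name: Build
        script:
          - npm install && npm run build
        artifacts:
          - dist/**            # files matching these globs are saved for later steps
    - step:
        name: Use build output
        script:
          - ls dist            # dist/ from the previous step is available here
```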

If you have any other questions, please feel free to reach out.

Kind regards,
Theodora

Hi Theodora, thanks for your help, starting both in the same step did work :) 

I am wondering though... if the backend server and frontend app need two different versions of node to install and run, how can I manage that in the same step? Should I use a version manager such as nvm or volta, or do you suggest another approach?


Hi Shreejan,

Thank you for the update, it's good to hear that this worked!

I am wondering though... if the backend server and frontend app need two different versions of node to install and run, how can I manage that in the same step? Should I use a version manager such as nvm or volta, or do you suggest another approach?

If the tests you run for the backend are independent of the tests for the frontend, you could use a version manager. Another option would be to test the backend in one step and the frontend in a different step, using a different Docker image as the build container for each step in your bitbucket-pipelines.yml file (with each image providing a different node version).
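A sketch of the two-step option (the repo URL, image tags, and scripts are illustrative):

```yaml
pipelines:
  default:
    - step:
        name: Backend tests
        image: node:16             # node version the backend needs
        script:
          - git clone git@bitbucket.org:myteam/backend.git  # hypothetical repo
          - cd backend && npm install && npm test
    - step:
        name: Frontend tests
        image: node:18             # a different node version for the frontend
        script:
          - npm install && npm test
```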

Kind regards,
Theodora

2 votes

Hey @Shreejan Regmi

Like you mentioned, the process might be killed at that point, and furthermore, the second step might not be running on the same host/container.

The safer approach would be either to run the two servers in the same step, or to run the servers on a dedicated runner that can host them.

Note: the second approach is far less recommended, because it makes life very complicated when you try to run your builds in parallel.

Thanks Erez! It worked when I started both of the servers in the same step. :) 

Deployment type: Cloud
Permissions level: Site Admin
Tags: Atlassian Community Events