The Dockerfile used to build our Docker image was taken from this repo.
I'm running a team repo pipeline; it pulls the Docker image and tries to run our CMD .... command. This CMD command runs node tests/test, which is an existing and valid folder/file path.
But I keep getting this error
I added an ls to the Dockerfile CMD step to see what the directory looks like, and to my surprise, instead of showing the QE Repo directory (from the Docker image), it shows the team repo directory.
That's why I keep getting this error message, but to my understanding I should be seeing the Docker image's directory.
Does anyone know what I'm missing? We need to run CMD ls && npm run test-legacy inside the Docker image's directory.
Note 1: All of the following commands run fine locally:
* npm run test-legacy
* node tests/test
* node -e 'require("./tests/test")'
Note 2: I have debugged the Docker image and confirmed that it contains the entire QE Repo directory.
Note 3: Running our Docker image with docker run works perfectly locally.
This seems to be the same issue I'm having, but I haven't figured out how to solve mine yet. Any ideas?
So the issue lies in how a pipeline runs a Docker image. It sets the working directory to be inside the repo you called your pipeline from, by adding the argument --workdir=$(pwd) to docker run.
So you need to change directories after the container has started, because arguments passed on the command line override what the Dockerfile has set.
In this case, you need to change to the directory you normally run your tests from:
CMD cd /api-automation && ls && npm run test-legacy
This should set your directory to /api-automation before it tries to run the tests.
Not really a bug, just a "feature" of the way pipelines set the working directory.
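To put that fix in context, here is a minimal Dockerfile sketch. The /api-automation path is taken from the answer above; the base image, COPY, and npm steps are illustrative assumptions, not the actual Dockerfile from the repo:

```dockerfile
# Hypothetical base image and build steps, for illustration only
FROM node:18

# Bake the QE repo into the image at /api-automation
WORKDIR /api-automation
COPY . .
RUN npm ci

# Shell-form CMD: the cd runs at container start, so it still takes effect
# even when the pipeline passes --workdir=$(pwd) and overrides WORKDIR
CMD cd /api-automation && ls && npm run test-legacy
```

Note that WORKDIR alone is not enough here, precisely because the pipeline's --workdir flag overrides it at run time; the explicit cd inside CMD is what guarantees the tests run from /api-automation.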
Because pipelines use the docker run command with the command-line argument --workdir=$(pwd), it overrides anything set in the Dockerfile. That normally makes sense: when you're running things from the command line, any arguments you pass are typically meant to override the defaults in the Dockerfile.
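The override behavior is easy to reproduce locally. A rough sketch, assuming an image named qe-image built with WORKDIR /api-automation (both names are hypothetical):

```shell
# WORKDIR from the Dockerfile applies: pwd prints /api-automation
docker run --rm qe-image pwd

# The command-line flag wins over WORKDIR: pwd prints /tmp instead.
# This is exactly how the pipeline ends up inside the team repo directory.
docker run --rm --workdir=/tmp qe-image pwd
```

This mirrors docker's general rule that run-time flags (--workdir, --entrypoint, --user, etc.) take precedence over the corresponding Dockerfile instructions.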
It can definitely lead to some weird behavior. A flag to disable this when calling pipes would be a nice feature, but it would also be hard for the pipeline to locate the repo folder it was called from if that folder weren't set as the working directory.
I definitely wish this behavior were better documented, but when you're wiring a bunch of different systems together, there will always be some quirks to figure out.