In our pipeline, we build a container image and push it to AWS ECR. Recently we expanded our tests in a way that requires them to run inside the container; otherwise they won't work. What I'm trying to figure out is how to set up a pipeline build step that builds and then runs a test container, and captures the results from that container so that if the tests fail, the pipeline fails.
If someone has an example they could point me to that would be awesome.
We have a similar process: we build the container and run the tests inside it.
We use these lines for this:
- export IMAGE_NAME=gcr.io/example:$BITBUCKET_COMMIT
- docker build -t $IMAGE_NAME .
- docker run --add-host host.docker.internal:$BITBUCKET_DOCKER_HOST_INTERNAL -e DATABASE_URL='postgis://test:test@host.docker.internal/app_db' $IMAGE_NAME /bin/bash -c "python manage.py check && pylint -j 4 --load-plugins pylint_django apps/ -E && pytest && coverage html"
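On the original question of making the pipeline fail when the tests fail: `docker run` exits with the exit code of the command it ran, and Bitbucket Pipelines fails a step on any non-zero exit, so no extra wiring is needed. Here is a minimal `bitbucket-pipelines.yml` sketch along those lines; the image name, container name, test command, and report path are placeholders for illustration, not from this thread:

```yaml
pipelines:
  default:
    - step:
        name: Build and test
        services:
          - docker
        script:
          - export IMAGE_NAME=test-image:$BITBUCKET_COMMIT
          - docker build -t $IMAGE_NAME .
          # If pytest exits non-zero, docker run exits non-zero and the step fails
          - docker run --name test-run $IMAGE_NAME /bin/bash -c "pytest && coverage html"
          # Copy test artifacts (e.g. coverage output) out of the stopped container
          - docker cp test-run:/app/htmlcov ./htmlcov
        artifacts:
          - htmlcov/**
```

If you need the step to keep going after a failure (for example, to always copy reports out) you'd have to capture the exit code yourself and re-raise it at the end of the script.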
I can send all the code if you need.
I think I get how you are doing it. Thanks. I'll let you know once I get a chance to run a few tests.