How can I use Pipelines with MyGet as a source for dotnet restore

Toby Russell
I'm New Here
March 20, 2018

When I try to configure Bitbucket Pipelines to use our private MyGet feed as well as NuGet.org as sources for dotnet restore, the build fails because it cannot find some dependencies on MyGet (as they are not hosted there).

How can I resolve this?

3 answers

0 votes
Jeremy Stafford
Contributor
October 29, 2018

@Toby Russell

You simply need to be more specific about your restore command. 

`dotnet restore` runs restore on the project or solution in the current directory, and it uses your global NuGet config unless there is a local config in that directory. That means the OS needs credentials installed for your private server.

Since you're running inside of a container, you either need to install those as part of your script or use the pre-auth URL, which is what I do. MyGet wants you to use the config method, e.g. `nuget sources add ...`, but in my experience it is a PITA and doesn't work half the time because Linux nuget support is lacking and `dotnet nuget` is equally bad. MyGet also provides a pre-auth URL, which seems to work pretty reliably.

 

For example:

dotnet restore src -s $NUGET_PREAUTH -s https://api.nuget.org/v3/index.json

 

Assuming a directory structure like this:

/
|-- .git
|-- kube/
|   |-- dev.yaml
|   |-- qa.yaml
|   |__ prod.yaml
|__ src/
    |-- YourProject.Host/
    |   |-- YourProject.Host.csproj
    |   |__ Dockerfile
    |-- YourProject.Lib/
    |__ YourProject.sln

In my case, it runs `restore` against `src/*.sln` using a source `-s $NUGET_PREAUTH`, which is an environment variable containing my pre-auth URL, as well as a second source, `-s https://api.nuget.org/v3/index.json`, the default NuGet server. This will restore packages from both sources.

 

In reality, you need to do a little bit more to keep your pre-auth URL secure as an environment variable. Check out my sample .yaml in this thread for an example.
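If you'd rather avoid the pre-auth URL, the config-file approach mentioned above can also be scripted inside the step itself. This is only a sketch: `MYGET_FEED_URL`, `MYGET_USER`, and `MYGET_PASS` are placeholder names for secured repository variables, not anything MyGet or Pipelines define.

```shell
# Sketch: generate a NuGet.Config next to the solution from secured variables.
# MYGET_FEED_URL, MYGET_USER and MYGET_PASS are placeholders here.
MYGET_FEED_URL="${MYGET_FEED_URL:-https://www.myget.org/F/yourfeed/api/v3/index.json}"
cat > NuGet.Config <<EOF
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <add key="myget" value="${MYGET_FEED_URL}" />
    <add key="nuget.org" value="https://api.nuget.org/v3/index.json" />
  </packageSources>
  <packageSourceCredentials>
    <myget>
      <add key="Username" value="${MYGET_USER:-user}" />
      <add key="ClearTextPassword" value="${MYGET_PASS:-secret}" />
    </myget>
  </packageSourceCredentials>
</configuration>
EOF
# dotnet restore will pick up NuGet.Config from the working directory.
```

With that file in place, a plain `dotnet restore src` should see both feeds without any `-s` flags.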

0 votes
Jeremy Stafford
Contributor
October 27, 2018

Ditto. One of the most annoying things about using cloud build services is the dependency on private servers. I'm looking into Pipelines right now (we're on Bamboo) and we'll need to ensure we can communicate with GCP, GKE, GCR, and MyGet securely, at the very least. In Bamboo, this is all installed on the host OS. Some official guidance on stuff like this would be valuable.

 

It's been 23 days since the last activity on this thread. Can we get something? Links? Anything?

Graham Gatus
Atlassian Team
Atlassian Team members are employees working across the company in a wide variety of roles.
October 29, 2018

@Jeremy Stafford I've played around with setting up an extremely basic pipeline with dotnet, nuget, and a basic installation of the Google Cloud tools. I built a simple C# app based on https://docs.microsoft.com/en-us/nuget/quickstart/install-and-use-a-package-using-the-dotnet-cli, which pulled down a JSON-related dependency from nuget, and ran the application.

The second step installs the gcloud tools in the dotnet image following the instructions for Linux at https://cloud.google.com/sdk/docs/quickstart-linux. You still need to configure authentication, for which Google has a tutorial: https://cloud.google.com/solutions/continuous-delivery-bitbucket-app-engine.

I've left out the details of what you would do once the gcloud tools are installed:

pipelines:
  default:
    - step:
        name: Using dotnet docker image
        image: microsoft/dotnet:sdk
        script:
          - dotnet restore --verbosity detailed
          - dotnet run

    - step:
        name: Using dotnet with gcloud tools
        image: microsoft/dotnet:sdk
        script:
          # Install gcloud suite - https://cloud.google.com/sdk/docs/quickstart-linux
          - curl https://dl.google.com/dl/cloudsdk/channels/rapid/downloads/google-cloud-sdk-222.0.0-linux-x86_64.tar.gz -o google-cloud-sdk-222.0.0-linux-x86_64.tar.gz
          - tar zxvf google-cloud-sdk-222.0.0-linux-x86_64.tar.gz google-cloud-sdk
          - ./google-cloud-sdk/install.sh --quiet
          - source google-cloud-sdk/path.bash.inc
          # configure auth, and deploy app to google

I'm not familiar with MyGet - is there a specific scenario you are trying to get working? If you need to store secure credentials somewhere (e.g. a username/password needed to log in to MyGet, assuming it's similar to an NPM or Maven registry and allows private packages to be password protected), you can use environment variables to store them securely (https://confluence.atlassian.com/bitbucket/environment-variables-794502608.html). Similarly, any credentials required for gcloud can also be stored as variables.

Jeremy Stafford
Contributor
October 29, 2018

Hey, thanks for getting back to me. In the meanwhile, I was able to answer a lot of these questions on my own by piecing together several conversations. I'll probably write a more complete blog post about this at some point, but here is some of what I came across the other day while learning how to make this work for us:

TL;DR - It's all possible as long as it's Linux based.

MyGet's auth system is a little wonky: in my experience, the API key and username/password methods don't work very well. Even when I use `nuget sources add` I get mixed results; I'm not sure what that's about. However, if you use the pre-auth URLs, everything seems to work fine... except for uploading symbols, where you get a 404. I don't know what causes that, but I don't think it's in any way related to Pipelines. At any rate, just store those bits of info globally or locally, secure them where appropriate, and use them in the script.

If you look at build pipelines in the same way that you look at a full build server, you're going to have a bad time. Instead, focus more on steps. One of the things I immediately ran into was the question "How do I get a docker image to use as a build environment that has everything I need?" You could easily create your own and have Pipelines use it, but the downside is that unless Atlassian caches all of our custom images, builds are going to be really slow. As Graham demonstrated right here, you could just install your dependencies as part of the step, but that's still really slow. That gcloud install is not light. Here's how I solved that:

  1. Use stages. microsoft/dotnet:sdk is cached, so it'll spin up instantly, but it doesn't have things like gcloud on it. So instead, leverage artifacts and have different steps use different images. For example:
    1. Step 1: Publish dotnet artifacts. Build and output the artifacts needed to build a docker image and set them aside as artifacts. Use the dotnet image for this.
    2. Step 2: In a different step, switch to a different cached image such as `google/cloud-sdk`. Be sure to request docker as a service in this step so that you have access to the daemon. Since your artifacts are carried forward, you can use them to build your docker image, and since you have gcloud in this environment, log in to GCP/GKE/GCR and push your image.
    3. Step 3: Deploy. Use the `google/cloud-sdk` image again for this.

So in that example, you do what you can in one image, then switch to another. This is MUCH faster than using custom images or bootstrapping cached ones. There are some things that still need to be bootstrapped in, but hopefully those are small. I have a few examples in my config.

Here's an example of my config that shows all of these concepts:

image: microsoft/dotnet:sdk

.set_environment: &setenv |
  export PROJECT_NAME=PipelinesTest.Host
  export DOCKER_IMAGE=gcr.io/myproject/pipelines-test:$BITBUCKET_COMMIT
  export DEPLOYMENT_NAME=pipelines-test

.gcloud_auth: &gcloudAuth |
  echo $GCP_CREDS_DEV | base64 --decode --ignore-garbage > ./gcloud-api-key.json
  gcloud auth activate-service-account --key-file gcloud-api-key.json
  gcloud auth configure-docker --quiet
  gcloud config set project $GCP_PROJECT
  gcloud container clusters get-credentials mycluster --region us-central1-a

pipelines:
  default:
    - step: &buildtest
        name: Build and Unit Test
        caches:
          - dotnetcore
        script:           
          - dotnet restore src -s $NUGET_SOURCE -s https://api.nuget.org/v3/index.json
          - dotnet build src -c Release -p:BuildNumber=$BITBUCKET_BUILD_NUMBER
          - dotnet tool install -g trx2junit
          - export PATH="$PATH:/root/.dotnet/tools"
          - |
                function convert_tests {
                  ls test-results/*.trx | xargs trx2junit
                  rm test-results/*.trx
                }
                trap convert_tests EXIT
          - dotnet vstest src/**/bin/Release/**/**.Tests.dll --logger:trx --ResultsDirectory:test-results
  branches:
    dev:
      - step: *buildtest
      - step: &pub
          name: Publish netcore artifacts
          artifacts:
            - src/PipelinesTest.Host/out/** 
          script:
            - *setenv
            - dotnet restore src -s $NUGET_SOURCE -s https://api.nuget.org/v3/index.json
            - dotnet publish src/$PROJECT_NAME/$PROJECT_NAME.csproj -c Release -o out
      - step: &artifacts
          name: Build and push docker image
          image: google/cloud-sdk:latest
          script:
            - *setenv            
            - *gcloudAuth
            - docker build -t $DOCKER_IMAGE src/$PROJECT_NAME/ 
            - docker push $DOCKER_IMAGE
          services:
            - docker
      - step:
          name: Deploy to dev
          deployment: test
          image: google/cloud-sdk:latest
          script:
            - *setenv
            - *gcloudAuth
            - kubectl apply -f kube/dev.yaml # this is where helm could be nice
            - kubectl set image -n dev deploy/$DEPLOYMENT_NAME $DEPLOYMENT_NAME=$DOCKER_IMAGE
    qa:
      - step: *buildtest
      - step: *pub
      - step: *artifacts
      - step:
          name: Deploy to QA
          deployment: staging
          image: google/cloud-sdk:latest
          script:
            - *setenv
            - *gcloudAuth
            - kubectl apply -f kube/qa.yaml # this is where helm could be nice
            - kubectl set image -n qa deploy/$DEPLOYMENT_NAME $DEPLOYMENT_NAME=$DOCKER_IMAGE
      - step:
          name: Deploy to Staging
          deployment: production
          trigger: manual
          image: google/cloud-sdk:latest
          script:
            - *setenv
            - *gcloudAuth
            - kubectl apply -f kube/prod.yaml # this is where helm could be nice
            - kubectl set image -n prod deploy/$DEPLOYMENT_NAME $DEPLOYMENT_NAME=$DOCKER_IMAGE

 

So in here, I demonstrate:

  • All branches get built and tested, but dev, qa, and prod have deployments which involve containerization.
  • In a dotnet environment, restore nuget from private and public servers, build and test. I use a dotnet tool to convert test output into junit since Pipelines doesn't understand trx. The installation of this tool takes about .25 seconds.
  • Publish the project to an output folder that is designated as an artifact
  • Switch to the gcloud container, build and push the image
  • Using the gcloud container, update the deployment

Also in this flow, I demonstrate:

  • Reusing scripts via anchors to reduce repetition
  • Pushes to dev get deployed to dev and cannot be promoted
  • Pushes to qa get deployed to qa and can be manually promoted to prod

Not shown:

  • Pushing nuget packages. I would simply make this another script, probably in the publish artifact step.
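A push step along those lines might look something like the following. This is only a sketch: `MYGET_PUSH_URL` and `MYGET_API_KEY` are placeholder names for secured repository variables, not values from my actual config.

```yaml
# Hypothetical extra step, reusing the anchors from the config above.
- step:
    name: Pack and push nuget packages
    script:
      - *setenv
      - dotnet pack src -c Release -o nupkgs
      - dotnet nuget push "nupkgs/*.nupkg" --source "$MYGET_PUSH_URL" --api-key "$MYGET_API_KEY"
```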

 

So a great offering for Pipelines (if it doesn't already exist) could be the ability to pay for a custom image cache, so that in the cases where I absolutely need to use a custom image, I don't have to wait for Pipelines to pull it down every build.

Aside from that, the only downside I can see is the limitation of how many deployment environments there can be. I've gone through the mental exercise which tells me that keeping the deployment environments simple is a good idea. I was able to come up with auxiliary flows that are outside of pipelines, so this still works, but for how long, I'm not sure.

In terms of using pipelines in a microservice environment, I think there are still a lot of tools and hooks to be had, but that is a different discussion.

0 votes
Philip Hodder
Atlassian Team
April 5, 2018

Are you building a .NET Core application or a .NET Framework application?

Bitbucket Pipelines currently only supports .NET Core builds, because our infrastructure only supports Linux at the moment. We have a feature request to add Windows infrastructure support that you can follow here: https://bitbucket.org/site/master/issues/13452/support-for-native-windows-net-builds-on

Assuming you're using .NET Core: Since you have a private MyGet feed, you probably need to set up some credentials and URLs in order for your dependencies to be pulled into Pipelines. How did you set this up locally?

gamesguru
I'm New Here
October 4, 2018

It is pure .NET Core for me. We are pulling our hair out, getting ready to ask our clients for source, because including that directly is a more tangible solution than getting nuget working in Pipelines.

 

Locally I can get the feed enabled quite easily, though Ubuntu differs from CoreOS in that it comes with an apt-ready nuget PPA and the supporting /usr/bin/cli.

This is the command I use on Ubuntu:

nuget sources add -name some_name -source website.json -username me@company.com -password hex_token

I am also able to run a dotnet restore even with our private nuget source disabled, suggesting that the MonoDevelop and VS Code IDEs are somehow informing the dotnet environment in a way that remains elusive from the command line.
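One possibility (an assumption on my part, not verified): the CLI and the IDEs share the per-user NuGet config, so a source added from an IDE is visible to dotnet restore as well. A quick way to check (the path may vary by SDK and tool version):

```shell
# Print the per-user NuGet config, if present; a source added from an IDE
# may show up here even when no repo-local NuGet.Config exists.
USER_CONFIG="$HOME/.nuget/NuGet/NuGet.Config"
echo "user-level config: $USER_CONFIG"
[ -f "$USER_CONFIG" ] && cat "$USER_CONFIG" || true
```

If the feed appears in that file, removing or editing the entry there should make the CLI and the IDEs behave consistently.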
