Bitbucket base URL problem

Tom Francis June 17, 2021

Hi,

I have not seen any questions posted yet, so I hope I am doing this in the right place! I would open an issue, but I want to be sure it's an actual issue, so I chose to ask about it here first.

I have configured Jira and Confluence using the Helm charts, and the base URL always gets set correctly and works.

This week, I have moved on to Bitbucket and am having problems with the base URL. I am using the latest version of the Helm chart - 0.11.0.

I have configured the ingress stanza in my values file as follows (comments removed for easier reading):

ingress:
  create: true
  nginx: true
  maxBodySize: 1g
  host: bitbucketpoc.company.com
  path: "/"
  annotations: {}
  https: true
  tlsSecretName:
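
For context, I apply these values with something like the command below - the release name is arbitrary and 'atlassian-data-center' is just whatever alias you added the chart repo under:

    helm upgrade --install bitbucket atlassian-data-center/bitbucket --version 0.11.0 -f values.yaml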

I have confirmed that, as a result of this, the following environment variables are present inside the pod:

SETUP_BASEURL=https://bitbucketpoc.company.com
SERVER_PROXY_NAME=bitbucketpoc.company.com
SERVER_PROXY_PORT=443
SERVER_SECURE=true
SERVER_SCHEME=https 

According to the Docker image documentation, these are the env variables that need to be set when running behind a reverse proxy, as we are in the K8s environment (nginx-ingress being the reverse proxy).
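
In case it's useful, this is how I checked them from outside the pod (the pod name is a placeholder):

    kubectl exec <bitbucket-pod> -- env | grep -E 'SETUP_BASEURL|SERVER_'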

 

When browsing the site after it comes up, we get the warning that the Base URL is not set correctly:
[screenshot of the Base URL warning]

When looking in the settings, the Base URL does match the URL I am accessing Bitbucket from. This suggests that it's the proxy settings that are not set up correctly. Alas, all I can do to check this is to look at the environment, which seems OK. Adding a bitbucket.properties file and having the settings in there (as we do in our legacy environment) has no effect.
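
For what it's worth, the proxy settings we carry in our legacy bitbucket.properties are along these lines (values shown are for this POC; treat it as a representative sketch rather than our exact file):

    # representative reverse-proxy settings for Bitbucket behind https
    server.proxy-name=bitbucketpoc.company.com
    server.proxy-port=443
    server.scheme=https
    server.secure=true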

In addition, there are two other side-effects that I have noticed that I believe to be related.

  • When viewing commits, we get an error. [screenshot of the error]
    This is accompanied by the following entry in the atlassian-bitbucket.log file:
    2021-06-18 01:34:44,769 WARN [http-nio-7990-exec-3] admin @1J95BJEx94x1114x4 4fv97i 10.100.1.87 "POST /rest/tags/latest/projects/TEST/repos/test/tags HTTP/1.1" c.a.p.r.c.s.j.XsrfResourceFilter Additional XSRF checks failed for request: http://bitbucketpoc.company.com/rest/tags/latest/projects/TEST/repos/test/tags , origin: https://bitbucketpoc.company.com , referrer: https://bitbucketpoc.company.com/projects/TEST/repos/test/commits , credentials in request: true , allowed via CORS: false
  • When viewing SCM commands, the clone URLs are shown as http:// rather than https://. [screenshot of the clone URLs]
    These should be https:// URLs that match the Base URL.

Thanks,

Tom...

1 answer

Answer accepted
Dylan Rathbone
Atlassian Team
June 17, 2021

Hi Tom, 

Thanks for reaching out. As a starter for ten, can you confirm what platform your cluster is running on, i.e. AWS/Azure/bare metal? We're primarily developing and testing the charts on AWS. We try to keep things agnostic, but we may have tripped up somewhere on this front, particularly with Ingress. Can you also confirm whether you utilised the instructions for Ingress setup here:
https://github.com/atlassian-labs/data-center-helm-charts/blob/master/docs/examples/ingress/INGRESS_NGINX.md

Cheers,
Dylan.

Tom Francis June 17, 2021

Sorry, yes. Vital info there! I meant to get into that when explaining more about our ingress, but I guess I got eager and clicked send instead :)

We are using EKS on AWS. It's pretty vanilla, currently on Version 1.19. Our ingress is set up such that our TLS is offloaded on the NLB.

Dylan Rathbone
Atlassian Team
June 17, 2021

All good Tom 👍 My initial thought on this is that the Ingress controller is not including the "X-Forwarded-Proto" header, meaning that BB doesn't know what protocol to respond on and so is defaulting to HTTP. We have some details on this topic here: https://github.com/atlassian-labs/data-center-helm-charts/blob/master/docs/CONFIGURATION.md#ingress

As a means of clarifying what headers are sent to the backend pods, you can use Wireshark...

Setup

  1. Install Krew (the package manager for kubectl plugins) by following the instructions here.

  2. Install the ksniff kubectl plugin:

    kubectl krew install sniff

  3. Install Wireshark:

    brew install --cask wireshark

  4. Copy the static-tcpdump binary to the target pod:

    kubectl cp /Users/<username>/.krew/store/sniff/v1.6.0/static-tcpdump <pod-name>:/tmp/static-tcpdump

Start capturing TCP traffic by running:

    kubectl sniff <bitbucket pod> -f "port 7990"

This command will automatically open Wireshark, which will begin traffic analysis of the pod.

Tom, when logging into your BB service, keep an eye out for the associated request and response (via Wireshark). When you have them, can you confirm which headers are included in the request? If you don't see the protocol (HTTPS), it could be an issue with how we've documented the ingress setup.

Tom Francis June 17, 2021

That's some good info there, thanks for the speedy response! It gives me something to work on. It could well be the X-Forwarded-Proto header, so I will look into that.

I will work on this tomorrow and let you know my findings.

Thanks,

Tom...

Dylan Rathbone
Atlassian Team
June 17, 2021

Thanks Tom. Let me know when you have your findings.

Dylan.

Yevhen
Atlassian Team
June 17, 2021

Hm... it is possible that when TLS termination happens on the NLB, requests that go to Nginx are then passed on to Bitbucket without the X-Forwarded-For or X-Forwarded-Proto headers. I can't confirm such a use case though. My most recent tests with EKS were with TLS termination at the Nginx level, not the NLB.

Tom Francis June 21, 2021

Well thanks guys for setting me down the right path. It was indeed related to the X-Forwarded-Proto header.

Unfortunately, we are unable to install anything like Wireshark as we are not able to run brew, but as I knew what I was looking for, I found another way.

I used a header-echoing container (brndnmtthws/nginx-echo-headers) to show all of the request headers being received by the application. As suspected, the X-Forwarded-Proto header was not being set correctly: it was present, but set to 'http'.
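
In case it helps anyone else, I wired up the echo container roughly as follows and pointed a test host on the same ingress at it. The names are placeholders, and I'm assuming the image listens on port 8080 - check the image's docs if you try this:

    kubectl run header-echo --image=brndnmtthws/nginx-echo-headers --port=8080
    kubectl expose pod header-echo --name=header-echo --port=80 --target-port=8080
    # then point a test Ingress rule at the 'header-echo' Service and curl it over https
    # to see exactly which X-Forwarded-* headers reach the backend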

The bulk of the time since your last comments here was spent trying to fix this in a way that does not interfere with other applications in our clusters. Let me elaborate.

We are using aws-load-balancer annotations in our ingress so that it provisions an NLB in front of the cluster. As previously mentioned, this is where we are doing our TLS offloading. The backend connections are TCP (i.e. not TLS).

In this scenario, Nginx is seeing regular http traffic as far as it's concerned, and it is then setting the X-Forwarded-Proto header to http, as you would expect it to.

As the NLB is a layer 4 load balancer, it has no mechanism to provide any of the X-Forwarded-* headers in the backend requests.

There are two solutions to this:

  • Hard-code the X-Forwarded-Proto (XFP) header in the nginx-ingress config. A super ugly idea that will break if you try to send non-https traffic. All our traffic is https, so that's moot, but it's still an ugly, hacky idea. Nope!
  • Use Proxy Protocol V2. Let's do this.

AWS NLB supports Proxy Protocol V2. Unfortunately, it's not as simple as just turning it on, as it is not backwards compatible and will break connectivity to the backend. If you turn this on, you also need to enable 'use-proxy-protocol' in your nginx config. It's all or nothing.

When you enable this on both ends of the connection, the nginx-ingress is then able to extract client connection information from the proxy protocol and set the XF* headers correctly.
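
Concretely, the two pieces of config end up looking roughly like this. The Service annotations are for the in-tree AWS cloud provider we are using, and the ConfigMap name will depend on how your nginx-ingress was installed, so treat this as a sketch rather than a drop-in:

    # On the nginx-ingress controller Service that provisions the NLB:
    metadata:
      annotations:
        service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
        # enables Proxy Protocol v2 on the NLB target groups
        service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"

    # In the nginx-ingress controller ConfigMap:
    data:
      use-proxy-protocol: "true"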

Once I did this, my BitBucket problems were gone.

Most of my time was spent going back through our history books to determine why our clusters were set up this way. It does not make sense to have incorrect XF* headers set as it could break applications. I needed to be sure there was no good reason that we were doing what we were doing. I think it was just an oversight and we got lucky that no applications were affected by the bad headers. We will look to migrate our clusters to use Proxy Protocol V2.

There are two more points I would like to bring up:

  1. This problem did not occur in Jira or Confluence using the same Helm charts repo. They seem to be able to determine the client scheme without those headers being set. Maybe all links there are relative rather than generated, as I don't see how else they could determine this without XF* being set.
  2. Although the docs clearly state that the XFP header is required, it may be worth adding some side-detail about what's needed when running behind an AWS NLB with TLS offloading, and how Proxy Protocol should be used. If this can save someone else a few days of headache, then it can only be a good thing :)

Thanks again for your help in pointing me in the right direction on this.

 

Tom...

Tom Francis June 26, 2021

Just an addition to this. It's not directly related to Bitbucket, but it is related to using an AWS NLB with EKS and Proxy Protocol. It's not straightforward and we ran into problems with this.

When you enable Proxy Protocol, you need to enable it in the nginx-ingress (easy via the ConfigMap) and you need to enable it on the NLB (easy via annotations). It all seems to work, except... the NLB health checks do not work.

When you enable Proxy Protocol on the NLB, it also enables Proxy Protocol for the health checks.

The health checks that are set up by the cloud controller point to an API endpoint on each node that is managed by K8s. This API does not support Proxy Protocol, so the health checks fail. This makes this solution unworkable for us.
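
You can see the targets going unhealthy pretty clearly if you describe the target group (the ARN is a placeholder):

    aws elbv2 describe-target-health --target-group-arn <target-group-arn>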

In the current version of the controller (in-tree) that we are using, it's not possible to configure the health checks to point to the workload ip/port.

We will be experimenting with the AWS Load Balancer Controller to see if this can be resolved. Unfortunately, that's also not straightforward in our environment. Of course :)

I will update this thread when I have tried with the AWS Load Balancer Controller to see if that fixes it.

Thanks,

Tom...

Dylan Rathbone
Atlassian Team
June 27, 2021

Hi Tom, 

Glad to hear you got to the bottom of your original issue, and thanks for passing on your comments. This is great feedback and we'll work on capturing it in our docs.

With regard to the NLB and proxy protocol not playing nicely with the health checks - admittedly, I don't think we were aware of this. We typically dev and test with a classic load balancer (ELB), where the X-Forwarded-* headers are maintained.

I assume the failing health checks you are referring to are those performed by K8s (ingress liveness and readiness probes) to ensure the backend ingress pod(s) are alive? What's interesting is that configuring an NLB with the proxy protocol is a documented subject; I'm just wondering if there are some unstated assumptions in there on how to configure this scenario appropriately...

Please do keep us in the loop on your progress, it would be great to know your findings. 
