Jira behind nginx proxy_cache

Does anyone have experience with *successfully* running Jira behind an nginx reverse proxy, using nginx's proxy_cache? If configured correctly, this should provide at least a moderate boost in performance.

I have just started experimenting with this, although I suspect I will find some problems with my current (very basic) configuration:

proxy_cache_path	/var/run/nginx-cache levels=1:2 keys_zone=nginx-cache:50m max_size=50m inactive=1440m;
proxy_temp_path		/var/run/nginx-cache/tmp; 

server {
	server_name jira.example.com;

	location / {
		proxy_set_header	Host $host;
		proxy_set_header	X-Forwarded-Host $host;
		proxy_set_header	X-Forwarded-Server $host;
		proxy_set_header	X-Forwarded-For $proxy_add_x_forwarded_for; 
		proxy_set_header	X-Real-IP  $remote_addr;
		proxy_pass		http://127.0.0.1:8080;
		proxy_cache		nginx-cache;
		proxy_cache_key		"$scheme://$host$request_uri";
		proxy_cache_valid	1440m;
		proxy_cache_min_uses	1;
	}
}

I have intentionally set the expiry times very high (24 hours) to experiment with how long I can cache things, and to make it easier to ascertain what downsides there may be to using proxy_cache (at least in this configuration).

Mainly I suspect I should add some more config so that it does not try to cache any authentication or admin areas.

5 answers

1 accepted

I have now updated to the following; it bypasses the cache for at least the top-level admin pages.

proxy_cache_path        /var/run/nginx-cache levels=1:2 keys_zone=nginx-cache:50m max_size=50m inactive=1440m;
proxy_temp_path         /var/run/nginx-cache/tmp;

server {
        server_name jira.example.com;

        location / {
                proxy_set_header        Host $host;
                proxy_set_header        X-Forwarded-Host $host;
                proxy_set_header        X-Forwarded-Server $host;
                proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header        X-Real-IP  $remote_addr;
                proxy_pass              http://127.0.0.1:8080;

                set $do_not_cache 0;

                if ($request_uri ~* ^(/secure/admin|/plugins|/secure/project)) {
                        set $do_not_cache 1;
                }

                proxy_cache             nginx-cache;
                proxy_cache_key         "$scheme://$host$request_uri";
                proxy_cache_bypass      $do_not_cache;
                proxy_cache_valid       1440m;
                proxy_cache_min_uses    1;
        }
}
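As an aside, the same bypass logic could be expressed with a map block at the http level instead of an if -- a sketch only, assuming the same URI prefixes as above:

map $request_uri $do_not_cache {
        default                 0;
        ~*^/secure/admin        1;
        ~*^/plugins             1;
        ~*^/secure/project      1;
}

The location block would then keep proxy_cache_bypass $do_not_cache; and drop the if/set lines.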

If someone else has a better answer I would gladly accept that instead, but the site keeps bugging me to accept an answer so I'll accept my own, at least until a better one comes along... :(

Sorry to revive this old thread, but how did it work out? Did JIRA perform faster? Do you see a delay in new issues appearing or statuses changing on the dashboard?

Cheers,

John G. (NZ)

I have noticed a boost in performance. It was not significant, but given how easy the nginx proxy cache is to implement (if you're already using nginx, at least), it is well worth it.

I have not noticed any delays with either new issues or statuses. I suspect I am not actually caching as much as I originally thought, due to Jira using the Cache-Control: no-cache header (see Sergey's answer below), but without having run any hard tests I would say it did boost responsiveness.

If I have time at some point I may experiment with making nginx ignore that header and see how much caching we can realistically do, but for now I have just been running with the basic configuration above, and there haven't been any problems so far.
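For reference, the directive to make nginx disregard those headers would look like this inside the location block -- untested with Jira on my side, and note that overriding no-cache risks serving stale or user-specific content:

                proxy_ignore_headers    Cache-Control Expires;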

For what it's worth, while Jira and Confluence seem to work perfectly well with this proxy_cache configuration, Stash does not like it.

This did not work for me. For example, the main project page stopped working after I implemented this; it raised an error that there were too many redirects.

Nginx respects Cache-Control headers sent by the app, so it is safe to enable caching for the top-level location. Most of the app's responses have Cache-Control: no-cache anyway -- only JavaScript files and images are cacheable.

You can also serve static files (e.g. images in atlassian-jira/images/) directly from nginx -- this is easier if nginx is co-located with the app.
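A sketch of doing that, assuming Jira is installed under /opt/atlassian/jira (adjust the filesystem path and the URL prefix for your installation and Jira version):

        location /images/ {
                alias   /opt/atlassian/jira/atlassian-jira/images/;
                expires 7d;
        }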

(edited to add:)

Setting proxy_cache_lock on for /s reduces the pain of JRA-37337 (the LESS compiler vs. plugin changes issue).
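A sketch of such a location block, reusing the cache zone and backend address from the question above:

        location /s/ {
                proxy_pass              http://127.0.0.1:8080;
                proxy_cache             nginx-cache;
                proxy_cache_key         "$scheme://$host$request_uri";
                proxy_cache_lock        on;
        }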

Certain non-cacheable responses are essentially static and could be requested frequently enough to warrant forced caching. Most of them are still user-session-specific, so the cache key must be altered to include $cookie_JSESSIONID (at least). Cache them at your own risk.

/osd.jsp -- OpenSearch metadata

/rest/api/2/filter/favourite -- related to JRA-36172. For some reason this response automatically includes a list of subscribers, which could be expensive to generate. However, the filter panel which issues this request has no use for this list.

/rest/menu/latest/appswitcher and /rest/nav-links-analytics-data/1.0 -- used by Application Navigator feature.

/rest/helptips/1.0/tips

/secure/projectavatar and /secure/useravatar
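For example, force-caching one of these endpoints might look like the following -- a sketch only, reusing the cache zone and backend from the question above; the validity time is an arbitrary choice:

        location = /rest/helptips/1.0/tips {
                proxy_pass              http://127.0.0.1:8080;
                proxy_cache             nginx-cache;
                proxy_cache_key         "$scheme://$host$request_uri$cookie_JSESSIONID";
                proxy_ignore_headers    Cache-Control Expires;
                proxy_cache_valid       200 10m;
        }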

That's helpful to know, thanks.

Hi there,


There is a guide to use Atlassian tools with nginx described in this link:

https://mywushublog.com/2012/08/atlassian-tools-and-nginx/

This configuration is quite simple; please give it a try if you still need it.

Also, please let me know how it goes for you.

Regards,

Celso Yoshioka

Unfortunately that guide does not have any information about using the nginx proxy_cache features, and so is not relevant to my question.

Note the trailing slashes in the {{proxy_pass}} directives:

server
{
        listen 80;
        server_name jira.example.com;
        server_tokens off;
        root /home/jira/current;
        merge_slashes on;
        msie_padding on;
        location / {
                proxy_pass http://jira.example.com:8080/;
                proxy_set_header Host jira.example.com:80;
        }
}

server
{
        listen 80;
        server_name stash.example.com;
        server_tokens off;
        root /home/stash/current;
        merge_slashes on;
        msie_padding on;

        location / {
                proxy_pass http://stash.example.com:7990/;
                proxy_set_header Host stash.example.com:80;
        }
}

server
{
        listen 80;
        server_name crowd.example.com;
        server_tokens off;
        root /home/crowd/current;
        merge_slashes on;
        msie_padding on;
}

server
{
        listen 80;
        server_name wiki.example.com;
        server_tokens off;
        root /home/confluence/current;
        merge_slashes on;
        msie_padding on;

        location / {
                proxy_pass http://wiki.example.com:8090/;
                proxy_set_header Host wiki.example.com:80;
        }
}

server
{
        listen 80;
        server_name bamboo.example.com;
        server_tokens off;
        root /home/bamboo/current;
        merge_slashes on;
        msie_padding on;

        location / {
                proxy_pass http://bamboo.example.com:8085/;
                proxy_set_header Host bamboo.example.com:80;
        }
}

This configuration works for us perfectly.

Your answer does not address my question, which is specifically about using nginx proxy_cache.

Ah ok, my bad. I think I missed the point there :)

I have not used nginx, but I did some quick analysis of the proxy cache in Apache on Jira 6.0. I did not find it worthwhile, given our actual usage pattern:

1) Issues were assigned to only one or two people.
2) Once a developer found his issue to work on, he went to it directly (i.e. avoiding the issue search page).
3) Agile boards were only really looked at/updated two to three times throughout the day per developer.
4) Updates from outside systems (source control) ran in bursts, so developers wanted to see the actual current status versus something cached.
5) During the scrum meeting there was some actual sharing, but the database cache was more important than the proxy cache.
6) If a QA developer was waiting for a status change, it was actually better to get it through an email notification than activity streams.
7) There was some advantage for the dashboard page, since the dashboard was tailored for each project.

Those are some useful insights, thanks for that!
