Does anyone have experience with *successfully* running Jira behind an nginx reverse proxy, using nginx's proxy_cache? This should provide at least a moderate boost in performance if configured correctly.
I have just started experimenting with this, although I suspect I will find some problems with my current (very basic) configuration:
proxy_cache_path /var/run/nginx-cache levels=1:2 keys_zone=nginx-cache:50m max_size=50m inactive=1440m;
proxy_temp_path /var/run/nginx-cache/tmp;

server {
    server_name jira.example.com;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://127.0.0.1:8080;

        proxy_cache nginx-cache;
        proxy_cache_key "$scheme://$host$request_uri";
        proxy_cache_valid 1440m;
        proxy_cache_min_uses 1;
    }
}
I have intentionally set the expiry times very high (24 hours) to experiment with how long content can be cached, and to make it easier to ascertain what downsides there may be to using proxy_cache (at least in this configuration).
Mainly I suspect I should add some more config so that it does not try to cache any authentication or admin areas.
I have now updated to the following configuration, which bypasses the cache for at least the top-level admin pages.
proxy_cache_path /var/run/nginx-cache levels=1:2 keys_zone=nginx-cache:50m max_size=50m inactive=1440m;
proxy_temp_path /var/run/nginx-cache/tmp;

server {
    server_name jira.example.com;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://127.0.0.1:8080;

        set $do_not_cache 0;
        if ($request_uri ~* ^(/secure/admin|/plugins|/secure/project)) {
            set $do_not_cache 1;
        }

        proxy_cache nginx-cache;
        proxy_cache_key "$scheme://$host$request_uri";
        proxy_cache_bypass $do_not_cache;
        proxy_cache_valid 1440m;
        proxy_cache_min_uses 1;
    }
}
If someone else has a better answer I would gladly accept that instead, but the site keeps bugging me to accept an answer so I'll accept my own, at least until a better one comes along... :(
Sorry to revive this old thread, but how did it work out? Did JIRA perform faster? Did you see any delay in new issues appearing or statuses changing on the dashboard?
Cheers,
John G. (NZ)
I have noticed a boost in performance. It was not significant, but given how easy the nginx proxy cache is to implement (if you're already using nginx, at least), it is well worth it.
I have not noticed any delays with either new issues or status changes. I suspect I am not actually caching as much as I originally thought, since Jira sends the Cache-Control: no-cache header on most responses (see Sergey's answer below), but even without having run any hard tests I would say it did boost responsiveness.
If I have time at some point I may experiment with making nginx ignore that header and see how much caching we can realistically do; a rough idea of what that would look like is below. For now I have just been running with the basic configuration above, and there haven't been any problems so far.
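The experiment would roughly amount to telling nginx to ignore the upstream Cache-Control/Expires headers for the static resource paths only. Something like the untested sketch below, where the /s/ prefix and the 10-minute validity are just my assumptions, not anything I have verified against Jira:

location /s/ {
    proxy_pass http://127.0.0.1:8080;
    proxy_set_header Host $host;

    proxy_cache nginx-cache;
    proxy_cache_key "$scheme://$host$request_uri";

    # Ignore the app's no-cache headers for this path only (risky!)
    proxy_ignore_headers Cache-Control Expires;
    proxy_cache_valid 200 10m;
}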
For what it's worth, while Jira and Confluence seem to work perfectly well with this proxy_cache configuration, Stash does not like it.
This did not work for me. For example, the main project page stopped working after implementing this; the browser reported an error about too many redirects.
Nginx respects the Cache-Control headers sent by the application, so it is safe to enable caching for the top-level location. Most of the application's responses have Cache-Control: no-cache anyway -- only JavaScript files and images are cacheable.
You can also serve static files (e.g. images in atlassian-jira/images/) directly from nginx -- this is easier if nginx is co-located with the application.
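A rough sketch of what that could look like (the /opt/atlassian/jira install path and the /images/ URL prefix are assumptions -- adjust them to your own layout and context path):

# Serve Jira's static images straight from disk instead of proxying them
location /images/ {
    alias /opt/atlassian/jira/atlassian-jira/images/;
    expires 7d;
    access_log off;
}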
(edited to add:)
Setting proxy_cache_lock on for /s reduces the pain of JRA-37337 (LESS compiler vs. plugin changes issue).
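Roughly like this (sketch only, reusing the nginx-cache zone from the question's configuration):

location /s/ {
    proxy_pass http://127.0.0.1:8080;
    proxy_cache nginx-cache;
    proxy_cache_key "$scheme://$host$request_uri";
    # Only one request per URI is sent upstream while the cache entry is being populated
    proxy_cache_lock on;
}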
Certain non-cacheable responses are essentially static and could be frequent enough to warrant forced caching. Most of them are still user session-specific, so the cache key must be altered to include $cookie_JSESSIONID (at least); see the sketch after the list below. Cache them at your own risk:
/osd.jsp -- OpenSearch metadata
/rest/api/2/filter/favourite -- related to JRA-36172. For some reason this response automatically includes a list of subscribers, which could be expensive to generate. However, the filter panel which issues this request has no use for this list.
/rest/menu/latest/appswitcher and /rest/nav-links-analytics-data/1.0 -- used by Application Navigator feature.
/rest/helptips/1.0/tips
/secure/projectavatar and /secure/useravatar
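A minimal sketch of what forced caching of one of these endpoints might look like (untested; the 5-minute validity is a guess, and including the session cookie in the key means every user gets a private copy):

# Force-cache the favourite-filter list per user session (at your own risk)
location = /rest/api/2/filter/favourite {
    proxy_pass http://127.0.0.1:8080;
    proxy_set_header Host $host;

    proxy_cache nginx-cache;
    # Include the session cookie so users never see each other's data
    proxy_cache_key "$scheme://$host$request_uri$cookie_JSESSIONID";
    # Override the app's Cache-Control: no-cache
    proxy_ignore_headers Cache-Control Expires;
    proxy_cache_valid 200 5m;
}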
Hi Brendan,
I know this is an old post but have you made any changes to your nginx proxy_cache setup since your answer in here?
Thanks!
Note the trailing slashes in the proxy_pass directives:
server {
    listen 80;
    server_name jira.example.com;
    server_tokens off;
    root /home/jira/current;
    merge_slashes on;
    msie_padding on;

    location / {
        proxy_pass http://jira.example.com:8080/;
        proxy_set_header Host jira.example.com:80;
    }
}

server {
    listen 80;
    server_name stash.example.com;
    server_tokens off;
    root /home/stash/current;
    merge_slashes on;
    msie_padding on;

    location / {
        proxy_pass http://stash.example.com:7990/;
        proxy_set_header Host stash.example.com:80;
    }
}

server {
    listen 80;
    server_name crowd.example.com;
    server_tokens off;
    root /home/crowd/current;
    merge_slashes on;
    msie_padding on;
}

server {
    listen 80;
    server_name wiki.example.com;
    server_tokens off;
    root /home/confluence/current;
    merge_slashes on;
    msie_padding on;

    location / {
        proxy_pass http://wiki.example.com:8090/;
        proxy_set_header Host wiki.example.com:80;
    }
}

server {
    listen 80;
    server_name bamboo.example.com;
    server_tokens off;
    root /home/bamboo/current;
    merge_slashes on;
    msie_padding on;

    location / {
        proxy_pass http://bamboo.example.com:8085/;
        proxy_set_header Host bamboo.example.com:80;
    }
}
This configuration works for us perfectly.
Your answer does not address my question, which is specifically about using nginx proxy_cache.
Ah ok, my bad. I think I missed the point there :)
Hi there,
There is a guide to use Atlassian tools with nginx described in this link:
https://mywushublog.com/2012/08/atlassian-tools-and-nginx/
This configuration is quite simple; please give it a try if you still need this.
Also, please let me know how it goes for you.
Regards,
Celso Yoshioka
I have not used nginx, but I did some quick analysis of the proxy cache in Apache with Jira 6.0. I did not find it worthwhile because of our actual usage pattern:
1) Issues were assigned to only one or two people.
2) Once a developer found an issue to work on, he went to it directly (i.e. avoiding the issue search page).
3) Agile boards were only really being looked at/updated two to three times throughout the day per developer.
4) Updates from outside systems (source control) ran in bursts, so developers wanted to see the actual current status versus something cached.
5) During the scrum meeting there was some actual sharing, but the database cache was more important than the proxy cache.
6) If a QA developer was waiting for a status change, it was actually better to get it through an email notification than through activity streams.
7) There was some advantage for the dashboard page since the dashboard was tailored for each project.