If you start Bitbucket Server in Docker with the image atlassian/bitbucket-server:7.8.0-ubuntu-jdk11 and then perform a restart, you get the error:
CRITICAL:root:Could not create log directory. The Bitbucket webapp was not started.
The problem is that someone broke entrypoint.py's create_log_dir() function.
If the log directory does not yet exist, it correctly returns True if the directory can be created and False if not.
Unfortunately, it does not return True when the log directory already exists. In that case the function returns nothing, which evaluates to false, so the "if not create_log_dir()" check in start_bitbucket() trips and Bitbucket is never started.
The temporary workaround is to delete the log directory, which on the next start makes the container take the non-broken branch of create_log_dir() and behave as expected. Unfortunately this only works once and will fail again on the next pod restart.
The proper fix would be to change entrypoint.py and add a "return True" at the end of the create_log_dir() function.
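For illustration, here is a minimal sketch of the pattern described above; the directory path and the surrounding code are assumptions for the example, not the actual contents of entrypoint.py:

```python
import logging
import os
import sys

LOG_DIR = '/var/atlassian/application-data/bitbucket/log'  # hypothetical path, for illustration only


def create_log_dir():
    if not os.path.isdir(LOG_DIR):
        try:
            os.makedirs(LOG_DIR)
            return True
        except OSError:
            return False
    # Without the line below, an existing log directory falls through and the function
    # implicitly returns None, which is falsy -- so the caller treats it as a failure.
    return True


def start_bitbucket():
    if not create_log_dir():
        logging.critical('Could not create log directory. The Bitbucket webapp was not started.')
        sys.exit(1)
    # ... launch Bitbucket here ...
```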
@Matthias Kannenberg well spotted. We found the same and issued a fix quickly (and put the necessary controls in place to prevent such things from getting merged into main in future). However, thanks to an oversight by yours truly, our initial rollback of those changes was not entirely successful, so a number of image tags remained on the old version. This has since been rectified and shouldn't be an issue going forward.
Let me know if you see any other issues!
Hey, first of all let me thank you! The Docker images on your private account have been a great help over the years, and I was using them long before Atlassian published official ones (I still rely on your Bamboo server image because I have not found a way to run lifecycle postStart commands as root for installing custom tooling / OS packages with the official image). Absolutely great work!
Thanks for fixing this issue as well!
If you are open to it, here are a few other issues I found while debugging the original issue:
1) JMX_REMOTE_AUTH is only checked for None, not for "", when determining whether it should be used.
2) JVM_MINIMUM_MEMORY and JVM_MEMORY_MAX are also not validated; setting them to "" causes the env retrieval to skip the internal default values, so the -Xms and -Xmx parameters are emitted without values and the Bitbucket start command fails.
Neither of these was an issue previously, and I used to have these variables set to "" in my Helm charts, so something must have changed in the last half a year or so. I have now updated my Helm chart to explicitly check for "" on my side and make sure I don't set the env variables, but I feel this could be handled more robustly for other users' sake.
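As an illustration only (the variable names follow the comment above and the defaults are invented, not what the actual entrypoint uses), a more defensive pattern would treat an empty string the same as an unset variable:

```python
import os


def env(name, default=None):
    # Fall back to the default when the variable is unset *or* set to an empty string.
    value = os.environ.get(name)
    return default if value in (None, '') else value


jvm_min = env('JVM_MINIMUM_MEMORY', '512m')
jvm_max = env('JVM_MEMORY_MAX', '1024m')
jvm_args = [f'-Xms{jvm_min}', f'-Xmx{jvm_max}']

if env('JMX_REMOTE_AUTH'):  # skipped when unset or ""
    jvm_args.append('-Dcom.sun.management.jmxremote.authenticate=true')
```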
@Matthias Kannenberg Thank you so much for the feedback! I started building those images when I was working as a Premier Support Engineer, and even now in my not-so-technical people leadership role I like to scratch that itch from time to time and play with Docker :)
For now we've reverted the entrypoint rewrite, so you won't run into those issues, but we plan to reintroduce those changes, so I appreciate the observations and I'll pass them on. Our intention is to overcome an issue where our entrypoint would call the built-in start-bitbucket.sh script to start Bitbucket (plus Elasticsearch if enabled), and this started the JVM process in such a way that we couldn't send it a clean shutdown signal, meaning stopping the image would forcefully terminate the JVM. That would've bitten us in the ass at some point, so the idea is to have our entrypoint launch the JVM(s) directly. For now we've reverted back to launching via start-bitbucket.sh, but when we redo this work I'll make sure we take your notes into account.
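For anyone curious, here is a minimal sketch of that idea (the java command line and path are placeholders, not what the real entrypoint will do): launch the JVM from the entrypoint itself and forward signals to it, so that `docker stop` results in a clean shutdown rather than a forceful kill.

```python
import signal
import subprocess
import sys

# Placeholder command line; the actual Bitbucket launch arguments will differ.
JAVA_CMD = ['java', '-jar', '/opt/atlassian/bitbucket/bitbucket.jar']


def main():
    jvm = subprocess.Popen(JAVA_CMD)

    def forward(signum, frame):
        # Pass SIGTERM/SIGINT straight through to the JVM so it can shut down cleanly.
        jvm.send_signal(signum)

    signal.signal(signal.SIGTERM, forward)
    signal.signal(signal.SIGINT, forward)
    sys.exit(jvm.wait())  # propagate the JVM's exit code


if __name__ == '__main__':
    main()
```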
Seeing the same error message with
atlassian/bitbucket-server:6.10.7
I had a working copy from the 24th, tried to pull the new copy today, and am now getting the error. The last update on that version shows it was "a day ago".
@Nicholas Bartlett this should be fixed now. My apologies - our rebuild pipeline takes a long time to run because we compile git from source during the build so that each image ships with the highest git version that its Bitbucket version supports. Rebuilding every 6.x and 7.x point release takes a few hours, and thanks to an oversight by yours truly, we were killing the build partway through.
We've now fixed the root cause of the failure and rebuilt & republished all 6.x and 7.x images, so if you pull atlassian/bitbucket-server:6.10.7 again you should find it working now. Please let me know if you run into any other issues.
This also happens with the image for Bitbucket Server 7.4.0.
I think it is an error in the image itself.
I deployed the Bitbucket Server 7.4.0 image on 11/Nov without any issues (through Docker Swarm).
Today I had to remove the stack and deploy a new one (no change to the docker-compose.yml, so I would expect the same image to be instantiated).
However, deploying the new Swarm stack probably triggered a download of a new image from Docker Hub. Now the container won't start and shows the same error as mentioned by the topic owner.
Running `docker images -a` on my machine I see:
REPOSITORY                   TAG      IMAGE ID       CREATED        SIZE
atlassian/bitbucket-server   <none>   4b66f976ab5d   21 hours ago   1.02GB
On Docker Hub I also see that the image with tag 7.4.0 was updated 21 hours ago.
https://hub.docker.com/layers/atlassian/bitbucket-server/7.4.0/images/sha256-c2f6eac1462a9ad71da16c03453b6d04af94506d19621c3306f64e551e118921?context=explore
Therefore, I presume that there is something inherently wrong in Atlassian's deployment pipeline causing all images to fail.
@Stefan Derungs this should be fixed now. A combination of an error that snuck into the build during some maintenance work and a failure to properly roll back 100% of our published images (we update all 6.x and 7.x releases with improvements) caused a number of tags to remain stuck on the broken version of the image.
This has since been fixed and we've put the necessary controls in place to prevent it from occurring in future. A bit of human error thanks to yours truly, but if you pull your tag again you should find it working correctly now.
Let me know if you see any issues!
Great, thanks! The container is starting up again as it should :)
Since I was using an external database & environment variables, I commented out the volume from my config file and it's working. Note that this workaround is only for people who do not need the mounted volume, so beware.
Ignore this tip, you will get 404s on repository code
Also having this problem. I automatically pull newer images daily, and the overnight update has failed with this error. There is no further information in the logs either.
I was able to resolve it by renaming the `log` folder to something else in the `bitbucket` folder of the mounted data. I then recreated my docker-compose stack, though that might not be required and a simple docker restart should be enough.
This also happens with 7.7.1.