The warning below shows up quite frequently in Confluence, and I saw from other questions that it can usually be solved by editing limits.conf. The challenge, however, is that the Linux distribution that comes with my Linux device does not have such a file. I can still query user limits with "ulimit -a", so I'm wondering whether there is another fix I can apply to help Confluence determine the configured process limit.
2020-10-17 19:34:11,282 WARN [HealthCheck:thread-2] [troubleshooting.stp.spi.DefaultFileSystemInfo] lambda$getThreadLimit$0 Failed to determine the configured process limit
java.lang.NumberFormatException: null
at java.lang.Integer.parseInt(Integer.java:542)
at java.lang.Integer.parseInt(Integer.java:615)
at com.atlassian.troubleshooting.stp.spi.DefaultFileSystemInfo.lambda$getThreadLimit$0(DefaultFileSystemInfo.java:52)
at java.util.Optional.map(Optional.java:215)
at com.atlassian.troubleshooting.stp.spi.DefaultFileSystemInfo.getThreadLimit(DefaultFileSystemInfo.java:49)
at com.atlassian.troubleshooting.healthcheck.checks.ThreadLimitHealthCheckCondition.shouldDisplay(ThreadLimitHealthCheckCondition.java:32)
at java.util.stream.MatchOps$1MatchSink.accept(MatchOps.java:90)
at java.util.ArrayList$ArrayListSpliterator.tryAdvance(ArrayList.java:1359)
at java.util.stream.ReferencePipeline.forEachWithCancel(ReferencePipeline.java:126)
at java.util.stream.AbstractPipeline.copyIntoWithCancel(AbstractPipeline.java:498)
at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:485)
at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
at java.util.stream.MatchOps$MatchOp.evaluateSequential(MatchOps.java:230)
at java.util.stream.MatchOps$MatchOp.evaluateSequential(MatchOps.java:196)
at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
at java.util.stream.ReferencePipeline.allMatch(ReferencePipeline.java:454)
at com.atlassian.troubleshooting.healthcheck.DefaultSupportHealthCheckSupplier.shouldDisplay(DefaultSupportHealthCheckSupplier.java:50)
at com.atlassian.troubleshooting.healthcheck.DefaultSupportHealthCheckSupplier.asPluginSuppliedSupportHealthCheck(DefaultSupportHealthCheckSupplier.java:126)
at com.atlassian.troubleshooting.healthcheck.DefaultSupportHealthCheckSupplier.lambda$healthChecksFrom$2(DefaultSupportHealthCheckSupplier.java:120)
at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
at java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1382)
at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499)
at com.atlassian.troubleshooting.healthcheck.DefaultSupportHealthCheckSupplier.healthChecksFrom(DefaultSupportHealthCheckSupplier.java:122)
at com.atlassian.troubleshooting.healthcheck.DefaultSupportHealthCheckSupplier.byInstance(DefaultSupportHealthCheckSupplier.java:87)
at com.atlassian.troubleshooting.healthcheck.SupportHealthStatusBuilder.getHelpPathUrl(SupportHealthStatusBuilder.java:109)
at com.atlassian.troubleshooting.healthcheck.SupportHealthStatusBuilder.buildStatus(SupportHealthStatusBuilder.java:134)
at com.atlassian.troubleshooting.healthcheck.SupportHealthStatusBuilder.ok(SupportHealthStatusBuilder.java:62)
at com.atlassian.troubleshooting.healthcheck.checks.FontHealthCheck.check(FontHealthCheck.java:29)
at com.atlassian.troubleshooting.healthcheck.impl.PluginSuppliedSupportHealthCheck.check(PluginSuppliedSupportHealthCheck.java:49)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Hello @Jaap !
The limitation you mention ("my Linux device does not have such a file") is quite interesting. Usually the limits.conf file is located under /etc/security/.
Since you are still able to use ulimit, you could try setting the limits temporarily with ulimit, as described in our documentation:
Resolution
We recommend setting the maximum number of running user processes and open files permanently; how you do this is operating-system specific. In most Linux distributions the limits can be set temporarily using ulimit, or permanently via limits.conf (e.g. on Ubuntu or RedHat), by adding the following:
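The exact lines from the quoted document are omitted above; as a hedged sketch, using the 8192 value commonly cited on the Health Check: Thread Limit page (the "confluence" user name is a placeholder assumption, not taken from the document), the two approaches look like this:

```shell
# Temporary change, current shell session only. Raising a soft limit above
# the hard limit requires root, so cap at the hard limit where necessary:
hard=$(ulimit -Hn)
if [ "$hard" = "unlimited" ] || [ "$hard" -ge 8192 ]; then
    ulimit -Sn 8192
else
    ulimit -Sn "$hard"
fi
echo "open-files soft limit is now $(ulimit -Sn)"

# Permanent change on distributions that use pam_limits: add lines like
# these to /etc/security/limits.conf ("confluence" is a placeholder for
# whatever user the service runs as):
#   confluence  soft  nproc   8192
#   confluence  hard  nproc   8192
#   confluence  soft  nofile  8192
#   confluence  hard  nofile  8192
```

Note that the limits.conf route only takes effect on systems where PAM applies pam_limits at login, which is exactly the mechanism an embedded busybox system may lack.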
You can check the whole document, as well as the man page for ulimit, here:
Also, would you mind sharing which Linux distribution you are using? That could help us understand why neither the file nor a manual way to adjust the limits is present on your system.
Let us hear from you!
Hi Diego,
I'm running a Confluence lab setup on a NAS device (embedded linux), which uses busybox builtin commands by default.
Searching system-wide for limits.conf (excluding the Docker plugin):
$ updatedb && locate limits.conf | grep -v docker_lib | wc -l
0
Similarly, searching for a security folder yields only the following:
/sys/kernel/security (empty folder)
/usr/lib/security (only has .so libraries)
Even so, the system still reports these limits:
$ ulimit -a
-f: file size (blocks) unlimited
-t: cpu time (seconds) unlimited
-d: data seg size (kb) unlimited
-s: stack size (kb) 8192
-c: core file size (blocks) 0
-m: resident set size (kb) unlimited
-l: locked memory (kb) 64
-p: processes 31300
-n: file descriptors 1024
-v: address space (kb) unlimited
-w: locks unlimited
-e: scheduling priority 0
-r: real-time priority 0
Interpreting the output, the system does have default numerical limits in place for the values described in Health Check: Thread Limit, though not at the recommended amounts:
-p: processes 31300 -> more than the advised amount of 8192
-n: file descriptors 1024 -> less than the advised amount of 8192
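Incidentally, even without limits.conf the kernel exposes the same per-process limits through /proc, and that works on this busybox system as well:

```shell
# /proc/self/limits (available since Linux 2.6.24) lists the limits of the
# reading process itself; no limits.conf or PAM machinery is required:
grep -E 'Max (processes|open files)' /proc/self/limits
```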
The exception, however, is a NumberFormatException, which (I guess) could have to do with the value "unlimited". But seeing that actual numbers are in place for the relevant limits, it seems more likely that Confluence is unable to read the limits altogether.
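One way to probe that guess from this end: if the health check obtains the limit by spawning a shell (that is my assumption, not something I verified in the Atlassian source), an empty or non-numeric answer would make Integer.parseInt fail with exactly "NumberFormatException: null". Checking what a freshly spawned, non-interactive shell reports here:

```shell
# An empty or non-numeric value on either line would explain a failed
# integer parse on the health check's side:
sh -c 'ulimit -u'   # process limit
sh -c 'ulimit -n'   # open file descriptors
```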
Any advice on how to work around this would be appreciated.
Kind regards, Jaap