I would like to configure the Docker memory limit differently depending on which step I run.
I have one step that runs tests with docker-compose and uses a lot of memory, but other steps just build the Docker image.
For instance, one step could use Docker with 1GB and another with 6GB to hold my several services.
Is it possible to override the Docker memory limit for a specific step?
We have the exact same requirement as @Victor Fleurant. We are running our tests in docker-compose and sometimes get "Container 'docker' exceeded memory limit." It's really annoying and breaks the whole CI process.
We just need the "Docker" service in the test step to have more memory, but there's no way to define that other than adding a memory limit in "definitions", which affects every other step that uses the Docker-in-Docker service.
Any chance of making a feature request for this?
Agreed, this is really annoying and unclear.
As a compromise workaround, I found that you can redefine (extend) the Docker service and use that in your step instead. That way your definition does not affect the basic docker service, and you don't have to adjust the size of other steps.
The drawback is that you can no longer simply use `caches: - docker`. Maybe there is a workaround for that too...
As always, Atlassian's documentation is just not detailed enough.
definitions:
  services:
    docker-6g:
      type: docker
      memory: 6144

pipelines:
  branches:
    cd:
      - step:
          size: 2x
          name: build docker image
          services:
            - docker-6g
          max-time: 20
          script:
            ...
      - step:
          name: update image on test server
          script:
            - pipe: atlassian/ssh-run:0.4.0
              variables:
                ....
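One thing worth noting about the snippet above: with size: 2x the step has 8192 MB in total, so giving the docker-6g service 6144 MB leaves 2048 MB for the step's own build container. That is why the 2x size is needed in the first place; a regular 4096 MB step could not fit a 6144 MB service at all.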
Would love this too, instead of having to use size 2x on all my steps that use Docker just because I need over 3GiB for Docker in one step.
Same here. I would like to specify Docker memory for a specific step, not for all steps.
Same question here.
From what I remember, you can control the memory limit first of all by choosing between two options: the standard one (4GB) and one that doubles the resources (8GB).
If you want to bring the memory limit down from either of those, you have to add services. Each service divides the overall limit. E.g. having three services brings the limit for the pipeline script itself down from 4GB to 1GB.
Going from 8GB to 6GB does not look to be mathematically possible this way, only from 8GB to 4GB (one service), but then you could just run with 4GB as the default.
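That said, if the memory field on a custom service definition works the way the snippet earlier in the thread suggests, the arithmetic becomes additive rather than divisional: each declared service reserves its configured memory (reportedly 1024 MB if unspecified) out of the step total, and the script's build container gets the remainder. A minimal sketch, assuming a 2x step and an explicit 6144 MB Docker service; the service name, step name, and compose command are placeholders:

definitions:
  services:
    docker-6g:
      type: docker
      memory: 6144      # Docker-in-Docker service reserves 6 GB

pipelines:
  default:
    - step:
        size: 2x        # 8192 MB total for this step
        name: run memory-hungry tests
        services:
          - docker-6g   # takes 6144 MB of the 8192 MB budget
        script:
          # the build container is left with 8192 - 6144 = 2048 MB
          - docker-compose up -d --build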
Running rootless Docker inside a pipeline would not add more options here, as it is itself limited when it comes to limiting resources:
"Currently, rootless mode ignores cgroup-related docker run flags such as --cpus and --memory. However, traditional ulimit and cpulimit can still be used, though they work at process granularity rather than container granularity."
E.g. see ulimit and sysctl (www.LinuxHowtos.org) and the administration and configuration guide of your choice.
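Given that, the only per-process knob left would be something like ulimit inside the step script itself. A minimal sketch, assuming a bash-based build container where all script lines share one shell session; the 4 GB cap and run-tests.sh are placeholder values:

pipelines:
  default:
    - step:
        name: tests with a per-process memory cap
        script:
          # ulimit -v takes a value in KB; 4194304 KB = 4 GB of virtual memory.
          # This constrains each individual process, not the container as a whole.
          - ulimit -v 4194304
          - ./run-tests.sh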