Hi,
I'm building a Docker image for a Java app, so I use a Maven container for that.
Now I want to push the image to ECR. ECR has very strict security, so you have to log in with the AWS CLI every time you need to push something (the token is valid for 12 hours only).
To log in you need to run something like "$(aws ecr get-login --no-include-email)" and provide the AWS key and secret as environment variables.
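On a machine with the AWS CLI installed, the whole dance is roughly this (region and values here are just placeholders):
```
export AWS_ACCESS_KEY_ID=<key id>
export AWS_SECRET_ACCESS_KEY=<secret key>
# get-login prints a ready-to-run "docker login -u AWS -p <token> https://<account>.dkr.ecr.<region>.amazonaws.com" command
$(aws ecr get-login --no-include-email --region eu-west-1)
```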
The problem: the Maven image (from Docker Hub) where the app is built has neither Python nor pip installed.
Merging Python + pip + Maven into a single Docker image is a bit of a hassle (and a maintenance burden, of course).
Maybe there is a simpler way to log Docker in to ECR via some Bitbucket plugin? Or maybe I can have a "pre-build" step where I can get the login string from the AWS CLI?
Thanks a lot in advance for ideas!
And there is a solution!
The Steps feature, with the ability to define an image at the step level, is a perfect fit for this kind of task.
So I defined a two-step pipeline:
```
- step:
    name: Build Login-to-ECR script
    image: python:3.6-alpine
    script:
      - pip install awscli
      - echo $(aws ecr get-login --no-include-email --region eu-west-1) > login.sh
    artifacts:
      - login.sh
    caches:
      - pip
- step:
    name: Build the app
    image: node:6-alpine
    script:
      - sh login.sh
```
The first step runs in a Python container with the AWS CLI installed. It executes the get-login command and saves the output to a file. The file is shared with further steps via the artifacts notation.
So the second step, which is supposed to build and upload the Docker image to the registry, just needs to execute the login command.
Looks really nice, as I don't need to build and maintain custom images and can use basic Python and Java (or any other) images instead.
PS: credentials for the AWS CLI are provided via pipeline environment variables.
Thank you for this!
Another big thanks from me as well, quite a handy snippet to share!
@Oleg Sigida May I ask how you shared the AWS auth credentials with the Python container?
I wanted to define it as a service so I could set ENV vars there, but it seems we can't use secured Bitbucket ENV vars in the "services" section for some weird reason, so I'm left with the container-inception option where we'd manually run:
```
docker run -e MY_AWS_ID=$MY_BITBUCKET_ENV_VAR (etc) python:3.6-alpine sh -c "pip install awscli && echo ..."
```
Just realised it SHOULD work out of the box, with the ENV vars set in Bitbucket available in all containers, but I may have done something wrong. I'll add a few test steps to echo Bitbucket-defined ENV vars inside different containers. Still curious how you did it (probably plain Bitbucket ENV vars as well).
Hi!
I use environment variables:
AWS_ACCESS_KEY_ID – the AWS access key.
AWS_SECRET_ACCESS_KEY – the AWS secret key. The access and secret key variables override credentials stored in the credential and config files.
see: https://docs.aws.amazon.com/cli/latest/userguide/cli-environment.html
You can define them per repository, in Repo -> Settings -> Pipelines -> Environment variables,
or you can define them globally at the organisation level; there is a Pipelines section there as well (be careful: all repositories will get access to the variables you define there).
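Once defined there, they show up as ordinary environment variables in every step, so the AWS CLI picks them up automatically. A quick sanity-check step could look something like this (just an illustration):
```
- step:
    name: Check AWS credentials
    image: python:3.6-alpine
    script:
      - echo "Key id is $AWS_ACCESS_KEY_ID"              # a secured variable's value is masked in the build log
      - pip install awscli
      - aws sts get-caller-identity --region eu-west-1   # fails fast if the key/secret pair is wrong
```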
Hey Oleg,
Thanks for the super quick reply. I edited almost immediately after posting, realising it was probably my own mistake that the Bitbucket Pipeline ENV vars were not working properly. It seems that you cannot make AWS_ACCESS_KEY_ID a "secure" env var, at least not in the image I'm using. We were just getting:
```
An error occurred (IncompleteSignatureException) when calling the GetAuthorizationToken operation: Credential must have exactly 5 slash-delimited elements, e.g. keyid/date/region/service/term, got '$AWS_ACCESS_KEY_ID/20180426/eu-central-1/ecr/aws4_request
```
Even though I'd prefer to keep it invisible, I don't really mind for the ID. Testing as we speak.
The KEY_ID (AWS_ACCESS_KEY_ID) is not secured for me (kinda useful to see which key pair is actually used).
The secret (AWS_SECRET_ACCESS_KEY) is a secured variable.
I tried and didn't get it to work:
```
+ echo $(aws ecr get-login --no-include-email --region eu-central-1) > dockerlogin.sh
An error occurred (InvalidSignatureException) when calling the GetAuthorizationToken operation: The request signature we calculated does not match the signature you provided. Check your AWS Secret Access Key and signing method. Consult the service documentation for details.
```
Think it's something else (I can manually auth myself, the credentials' values are not the problem for sure).
Doing one last attempt with both unsecured vars, but I don't think that will be the problem! Any ideas, @Oleg Sigida?
Interesting... It worked with both unsecured. Can't get it working if either is secured. Do I need to encode or encrypt it? Can't seem to find any requirement in the docs. Thanks in advance!
Edit: Well, a new ID + secret pair did the trick. As you can't ever view a secured env var, I can't be sure, but it seems the only problem all along was that someone in here failed to copy-paste a string properly into the Bitbucket vars config. And I failed to check it before going into debugging and proposing complex alternatives :')
I didn't have issues like that when I was setting up CI.
It is good to have a dedicated key/secret with limited access rights anyway.
Great that you sorted it out!
I tried to run it, but got an exception regarding the profile. I set AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY in Settings > Repository variables.
"ProfileNotFound(profile=profile_name)
botocore.exceptions.ProfileNotFound: The config profile (default) could not be found
",
Any idea? Thanks
@jfordec Hmm, looks like something went wrong with the key and secret.
Double-check by echo-ing the env variables.
You can also debug it by launching a python:3.6-alpine container with the env variables defined and executing these commands one by one manually. It will be faster than using the Bitbucket pipeline.
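Something along these lines (the values are placeholders, of course):
```
docker run --rm -it \
  -e AWS_ACCESS_KEY_ID=<key id> \
  -e AWS_SECRET_ACCESS_KEY=<secret key> \
  python:3.6-alpine sh

# then, inside the container:
pip install awscli
aws ecr get-login --no-include-email --region eu-west-1
```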
@Oleg Sigida Thanks for your quick response. It's my bad: there was an AWS_PROFILE=default entry among my variables, which caused the exception. After removing it and adding the build and tag commands, I can push to ECR successfully. The following is the complete step.
.....
```
- step:
    name: Build the app
    image: node:6-alpine
    script:
      - sh login.sh
      - docker build -t name .
      - docker tag ****
      - docker push ****
```
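One thing worth mentioning: depending on the setup, a step that runs docker build/push usually also needs the Docker service enabled for that step, e.g.:
```
- step:
    name: Build the app
    services:
      - docker
```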