
Chabane R. for Stack Labs


Securing access to AWS IAM Roles from Gitlab CI

How many access and secret keys are stored per day as variables in the Gitlab CI configuration?

When an AWS access key is saved in Gitlab, we face all the security issues of storing credentials outside the cloud infrastructure: access control, authorization, key rotation, key age, destruction, storage location, and so on.

There are 2 common reasons for developers to store AWS credentials in Gitlab CI:

  • They use shared runners.
  • They use specific runners deployed in an Amazon Elastic Kubernetes Service (EKS) cluster but do not use (or do not know about) the IAM Roles for Service Accounts feature.

You can continue to use AWS access keys in Gitlab CI and secure them with external tools like Vault and Forseti, but this adds yet more tools to manage.

The alternative that AWS proposes is the IAM Roles for Service Accounts (IRSA) feature.

The IRSA feature lets you bind the Kubernetes Service Account used by a specific runner to an AWS IAM Role.
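Under the hood, EKS injects a projected web identity token (and the AWS_ROLE_ARN / AWS_WEB_IDENTITY_TOKEN_FILE environment variables) into pods running under an annotated service account, and the AWS CLI and SDKs exchange that token for temporary credentials via sts:AssumeRoleWithWebIdentity. The end state we build step by step below looks roughly like this (the account ID is a placeholder):

# Target state (illustrative): the service account annotated with the IAM role to assume
apiVersion: v1
kind: ServiceAccount
metadata:
  name: app-deployer
  namespace: dev
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::<ACCOUNT_ID>:role/eks-cluster-role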

Working with IRSA

The first step is to create and configure our EKS devops cluster.

  • We start by creating our EKS cluster [1] using eksctl:


export AWS_PROFILE=<AWS_PROFILE>
export AWS_REGION=eu-west-1
export EKS_CLUSTER_NAME=devops
export EKS_VERSION=1.19

eksctl create cluster \
 --name $EKS_CLUSTER_NAME \
 --version $EKS_VERSION \
 --region $AWS_REGION \
 --managed \
 --node-labels "nodepool=dev"


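Cluster creation takes several minutes. As an optional sanity check (not part of the original setup), you can confirm the cluster reaches the ACTIVE state:

aws eks describe-cluster \
  --name $EKS_CLUSTER_NAME \
  --region $AWS_REGION \
  --query cluster.status \
  --output text    # expected output: ACTIVE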
  • Create an IAM OIDC identity provider for the cluster


eksctl utils associate-iam-oidc-provider --cluster=$EKS_CLUSTER_NAME --approve

ISSUER_URL=$(aws eks describe-cluster \
                       --name $EKS_CLUSTER_NAME \
                       --query cluster.identity.oidc.issuer \
                       --output text)


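If you want to double-check that the provider was registered in IAM, you can list the account's OIDC providers and look for the issuer host from $ISSUER_URL (an optional verification step):

# The provider ARN should end with the cluster's OIDC issuer host
aws iam list-open-id-connect-providers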
  • Configure kubectl to communicate with the cluster:


aws eks --region $AWS_REGION update-kubeconfig --name $EKS_CLUSTER_NAME


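A quick way to confirm kubectl is now talking to the new cluster (optional check):

kubectl get nodes    # the managed node group created by eksctl should show up as Ready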
  • Create the namespace to use for the Kubernetes service account.


kubectl create namespace dev


  • Create the Kubernetes service account to use for the specific runner:


kubectl create serviceaccount --namespace dev app-deployer


  • Allow the Kubernetes service account to assume the IAM Role by creating the role with a trust policy that references the cluster's OIDC provider and this service account. This trust relationship lets pods running under the Kubernetes service account act as the IAM Role.


ISSUER_HOSTPATH=$(echo $ISSUER_URL | cut -f 3- -d'/')

AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)

PROVIDER_ARN="arn:aws:iam::$AWS_ACCOUNT_ID:oidc-provider/$ISSUER_HOSTPATH"

cat > trust-policy.json << EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "$PROVIDER_ARN"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "${ISSUER_HOSTPATH}:sub": "system:serviceaccount:dev:app-deployer",
          "${ISSUER_HOSTPATH}:aud": "sts.amazonaws.com"
        }
      }
    }
  ]
}
EOF

ROLE_NAME=eks-cluster-role

aws iam create-role \
          --role-name $ROLE_NAME  \
          --assume-role-policy-document file://trust-policy.json

EKS_ROLE_ARN=$(aws iam get-role \
                        --role-name $ROLE_NAME \
                        --query Role.Arn --output text)


  • Add the eks.amazonaws.com/role-arn=$EKS_ROLE_ARN annotation to the Kubernetes service account, using the IAM role ARN.


kubectl annotate serviceAccount app-deployer -n dev eks.amazonaws.com/role-arn=$EKS_ROLE_ARN


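You can verify that the annotation was applied (an optional check; the account ID is shown as a placeholder here):

kubectl get serviceaccount app-deployer -n dev -o yaml
# The output should contain:
#   annotations:
#     eks.amazonaws.com/role-arn: arn:aws:iam::<ACCOUNT_ID>:role/eks-cluster-role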

You could also use eksctl create iamserviceaccount [..] [2]
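For reference, an eksctl invocation that creates the service account, the IAM role and its trust policy in one step might look like the following sketch (the attached managed policy ARN is a placeholder; eksctl generates its own role name unless you override it):

eksctl create iamserviceaccount \
  --cluster $EKS_CLUSTER_NAME \
  --namespace dev \
  --name app-deployer \
  --attach-policy-arn arn:aws:iam::aws:policy/<YOUR_POLICY> \
  --approve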


Assigning the KSA to the Gitlab runner

The next step is to assign the Kubernetes Service Account (KSA) to our Gitlab runner.

  • Start by installing Helm:


curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh


  • Add the Gitlab Helm chart repository:


helm repo add gitlab https://charts.gitlab.io


  • Configure the runner:

Create the file values.yaml:



imagePullPolicy: IfNotPresent
gitlabUrl: https://gitlab.com/
runnerRegistrationToken: "<REGISTRATION_TOKEN>"
unregisterRunners: true
terminationGracePeriodSeconds: 3600
concurrent: 10
checkInterval: 30
rbac:
  create: true
metrics:
  enabled: true
runners:
  image: ubuntu:18.04
  locked: true
  pollTimeout: 360
  protected: true
  serviceAccountName: app-deployer
  privileged: false
  namespace: dev
  builds:
    cpuRequests: 100m
    memoryRequests: 128Mi
  services:
    cpuRequests: 100m
    memoryRequests: 128Mi
  helpers:
    cpuRequests: 100m
    memoryRequests: 128Mi
  tags: "k8s-dev-runner"
  nodeSelector: 
    nodepool: dev



You can find the description of each attribute in the Gitlab Runner chart repository [3].

  • Get the Gitlab registration token in Project -> Settings -> CI/CD -> Runners in the Setup a specific Runner manually section.

  • Install the runner:



helm install -n dev app-dev-runner -f values.yaml gitlab/gitlab-runner


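Once the chart is installed, you can check that the runner pod is up (an optional verification; pod names will differ):

helm status app-dev-runner -n dev
kubectl get pods -n dev    # a gitlab-runner pod should be in Running state

The runner should also appear in Project -> Settings -> CI/CD -> Runners with the k8s-dev-runner tag.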


Using the specific runner in Gitlab CI

Before running our first pipeline in Gitlab CI, let's grant the IAM role we created earlier the permissions it needs to administer EKS clusters.



cat > eks-admin-policy.json << EOF
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "eksadministrator",
            "Effect": "Allow",
            "Action": [
                 "eks:*",
                 "cloudformation:*",
                 "ec2:*",
                 "ssm:*",
                 "iam:*"

            ],
            "Resource": "*"
        }
    ]
}
EOF

aws iam put-role-policy \
    --role-name $ROLE_NAME \
    --policy-name eks-admin-policy \
    --policy-document file://eks-admin-policy.json



Note: this policy is only an example. AWS recommends using fine-grained permissions.
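As an illustration of tightening it, you could keep only the EKS actions you actually use and restrict the CloudFormation permissions to the stacks eksctl creates. Note that creating a cluster with eksctl still requires additional EC2 and IAM permissions, so this snippet is only a starting point, not a drop-in replacement:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "eksscopedactions",
            "Effect": "Allow",
            "Action": [
                "eks:CreateCluster",
                "eks:DescribeCluster",
                "eks:ListClusters",
                "eks:DeleteCluster"
            ],
            "Resource": "*"
        },
        {
            "Sid": "eksctlstacksonly",
            "Effect": "Allow",
            "Action": "cloudformation:*",
            "Resource": "arn:aws:cloudformation:eu-west-1:*:stack/eksctl-*/*"
        }
    ]
}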

Now we can run our pipeline, defined in .gitlab-ci.yml:



stages:
  - dev

before_script:
    - yum install -y tar gzip
    - curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
    - mv /tmp/eksctl /usr/local/bin
infra:
  stage: dev
  image: 
    name: amazon/aws-cli
  script: 
    - eksctl create cluster --name=business --region=eu-west-1 --managed --instance-types t3.medium
  tags:
    - k8s-dev-runner



The job creates an EKS cluster in the AWS account using eksctl, without any AWS access keys stored in Gitlab. We can follow the same steps for a prod environment.
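To convince yourself that the job really runs under the IAM role rather than stored keys, you could add a small job next to infra (a sketch using the same image and tag as above); sts get-caller-identity should return the assumed-role ARN of eks-cluster-role:

check-identity:
  stage: dev
  image:
    name: amazon/aws-cli
  script:
    - aws sts get-caller-identity
  tags:
    - k8s-dev-runner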


Conclusion

This mechanism keeps long-lived AWS credentials out of Gitlab: the runner's pods obtain short-lived credentials for the IAM role at job time. You can also schedule a cron job that deletes the IAM roles in the evening and re-creates them in the morning of a working day.
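One way to implement that schedule (a sketch only, assuming a scheduler host or job with sufficient IAM permissions and the trust-policy.json and eks-admin-policy.json files from earlier; inline policies must be deleted before the role itself):

# Illustrative crontab entries
# 19:00 on weekdays: remove the inline policy, then the role
0 19 * * 1-5 aws iam delete-role-policy --role-name eks-cluster-role --policy-name eks-admin-policy && aws iam delete-role --role-name eks-cluster-role
# 07:00 on weekdays: re-create the role and re-attach the policy
0 7 * * 1-5 aws iam create-role --role-name eks-cluster-role --assume-role-policy-document file://trust-policy.json && aws iam put-role-policy --role-name eks-cluster-role --policy-name eks-admin-policy --policy-document file://eks-admin-policy.json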

If you have any questions or feedback, please feel free to leave a comment.

Otherwise, I hope I've convinced you to remove your access and secret keys from Gitlab CI variables and use specific runners in an EKS cluster with IRSA configured.

By the way, do not hesitate to share with peers 😊

Thanks for reading!

Documentation

[1] https://aws.amazon.com/fr/blogs/opensource/introducing-fine-grained-iam-roles-service-accounts/
[2] https://eksctl.io/usage/iamserviceaccounts
[3] https://gitlab.com/gitlab-org/charts/gitlab-runner/-/blob/main/values.yaml

Top comments (1)

arit20y

Suppose I have created a service account for ECR access. How can I generate something like the following from the AWS CLI?

[profile ecraccess]
role_arn = arn:aws:iam::251297227856:role
source_profile = default

I also need to configure this in .gitlab-ci.yml:

aws ecr get-login-password --region eu-west-1 --profile ecraccess