Chabane R. for Stack Labs


Securing access to Google Service Accounts from Gitlab CI

How many service account keys are stored every day as variables in Gitlab CI configurations?

When a Google Service Account (GSA) key is saved in Gitlab, we face all the security issues of storing credentials outside of the cloud infrastructure: access, authorization, key rotation, age, destruction, location, etc.

There are 2 common reasons for developers to store GCP credentials in Gitlab CI:

  • They use shared runners.
  • They use specific runners deployed in a Google Kubernetes Engine cluster but do not use (or do not know about) the Workload Identity add-on.

You can keep using GSA keys in Gitlab CI and secure them with external tools like Vault and Forseti, but that means more tools to manage. For reference, the pattern we want to eliminate usually looks like the snippet below.
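
This is a hedged sketch of a typical job script; GCP_SA_KEY is a hypothetical CI/CD variable holding the JSON key:

# The JSON key lives in a Gitlab CI/CD variable, outside the cloud infrastructure
echo "$GCP_SA_KEY" > /tmp/key.json
gcloud auth activate-service-account --key-file=/tmp/key.json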

The alternative that Google Cloud proposes to customers is to enable the Workload Identity add-on.

The Workload Identity add-on provided in Google Kubernetes Engine allows you to bind the Kubernetes Service Account (KSA) associated with a specific runner to a Google Service Account.

Note: At the time of writing this post, when you enable Workload Identity you cannot use some GKE add-ons like Istio, Config Connector or Application Manager on the default node pool, because they depend on the Compute Engine metadata server while Workload Identity uses the GKE metadata server.

For this reason, I often recommend a dedicated GKE cluster for Gitlab runners, to avoid any impact on your business workloads.

Working with Workload Identity

The first step is to create and configure our GKE devops cluster.

  • We start by creating our GKE cluster [1]:


gcloud projects create mycompany-core-devops
gcloud config set project mycompany-core-devops
# The Kubernetes Engine API is required to create the cluster
gcloud services enable container.googleapis.com containerregistry.googleapis.com
gcloud container clusters create devops \
  --workload-pool=mycompany-core-devops.svc.id.goog



Let's create a node pool for the runner jobs:



gcloud container node-pools create gitlab-runner-jobs-dev \
  --cluster=devops \
  --node-taints=gitlab-runner-jobs-dev-reserved=true:NoSchedule \
  --node-labels=nodepool=dev \
  --enable-autoscaling \
  --min-nodes=0 --max-nodes=3


  • Configure kubectl to communicate with the cluster:


gcloud container clusters get-credentials devops


  • Create the namespace to use for the Kubernetes service account:


kubectl create namespace dev


  • Create the Kubernetes service account to use for the specific runner:


kubectl create serviceaccount --namespace dev app-deployer


  • Create a Google service account for the specific runner:


gcloud projects create mycompany-core-security
gcloud config set project mycompany-core-security
gcloud iam service-accounts create app-dev-deployer



Note: For easier visibility and auditing, I recommend creating service accounts centrally in dedicated projects.

  • Allow the Kubernetes service account to impersonate the Google service account by creating an IAM policy binding between the two. This binding allows the Kubernetes Service account to act as the Google service account.


gcloud iam service-accounts add-iam-policy-binding \
  --role roles/iam.workloadIdentityUser \
  --member "serviceAccount:mycompany-core-devops.svc.id.goog[dev/app-deployer]" \
  app-dev-deployer@mycompany-core-security.iam.gserviceaccount.com


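You can double-check the binding by reading back the IAM policy of the GSA; the output should list roles/iam.workloadIdentityUser with the KSA member created above (a quick verification using the same names):

gcloud iam service-accounts get-iam-policy \
  app-dev-deployer@mycompany-core-security.iam.gserviceaccount.com
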
  • Add the iam.gke.io/gcp-service-account=app-dev-deployer@mycompany-core-security.iam.gserviceaccount.com annotation to the Kubernetes service account, using the email address of the Google service account.


kubectl annotate serviceaccount \
  --namespace dev \
  app-deployer \
  iam.gke.io/gcp-service-account=app-dev-deployer@mycompany-core-security.iam.gserviceaccount.com


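
To confirm that Workload Identity works before wiring up the runner, you can start a throwaway pod with the annotated KSA and check which identity the metadata server returns. This is a minimal check loosely based on the verification step in [1]; the pod name workload-identity-test is arbitrary:

# The active account should be app-dev-deployer@mycompany-core-security.iam.gserviceaccount.com
kubectl run -it --rm workload-identity-test \
  --image=google/cloud-sdk:slim \
  --namespace=dev \
  --restart=Never \
  --overrides='{"apiVersion":"v1","spec":{"serviceAccountName":"app-deployer"}}' \
  -- gcloud auth list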


Assigning the KSA to the Gitlab runner

The next step is to assign the KSA to our Gitlab runner.

  • Start by installing Helm:


curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh


  • Add the Gitlab Helm repository:


helm repo add gitlab https://charts.gitlab.io


  • Configure the runner:

Create the file values.yaml:



imagePullPolicy: IfNotPresent
gitlabUrl: https://gitlab.com/
runnerRegistrationToken: "<>"
unregisterRunners: true
terminationGracePeriodSeconds: 3600
concurrent: 10
checkInterval: 30
rbac:
  create: true
metrics:
  enabled: true
runners:
  image: ubuntu:18.04
  locked: true
  pollTimeout: 360
  protected: true
  serviceAccountName: app-deployer
  privileged: false
  namespace: dev
  builds:
    cpuRequests: 100m
    memoryRequests: 128Mi
  services:
    cpuRequests: 100m
    memoryRequests: 128Mi
  helpers:
    cpuRequests: 100m
    memoryRequests: 128Mi
  tags: "k8s-dev-runner"
  nodeSelector: 
    nodepool: dev
  nodeTolerations:
    - key: "gitlab-runner-jobs-dev-reserved"
      operator: "Equal"
      value: "true"
      effect: "NoSchedule"



You can find the description of each attribute in the Gitlab runner chart repository [2].

  • Get the Gitlab registration token from Project -> Settings -> CI/CD -> Runners, in the Set up a specific Runner manually section.

  • Install the runner:



helm install -n dev app-dev-runner -f values.yaml gitlab/gitlab-runner


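If the installation went well, the runner pod should be running in the dev namespace (a quick sanity check; the runner should also appear in the Gitlab Runners settings page):

kubectl get pods -n dev
helm status app-dev-runner -n dev
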


Using the specific runner in Gitlab CI

Before running our first pipeline in Gitlab CI, let's create a new business project and grant the Kubernetes Engine cluster admin role to the GSA we created earlier.



gcloud projects create mycompany-business-dev
gcloud config set project mycompany-business-dev
gcloud projects add-iam-policy-binding mycompany-business-dev \
  --role roles/container.clusterAdmin \
  --member "serviceAccount:app-dev-deployer@mycompany-core-security.iam.gserviceaccount.com"



Now we can run our pipeline .gitlab-ci.yml:



stages:
  - dev

infra:
  stage: dev
  image:
    name: google/cloud-sdk
  script:
    - gcloud config set project mycompany-business-dev
    # The Kubernetes Engine API is required to create the cluster
    - gcloud services enable container.googleapis.com containerregistry.googleapis.com
    - gcloud container clusters create business
  tags:
    - k8s-dev-runner



The job will create a GKE cluster in the mycompany-business-dev project. We can follow the same steps for a prod environment.
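
Once the job has finished, you can verify the result from your workstation; no key was ever stored in Gitlab (assuming the project created above):

gcloud container clusters list --project mycompany-business-dev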


Go further

We can go further and only allow our GSA to create Kubernetes manifests in a specific namespace of our business cluster.

Create a file rbac-dev.yaml:



kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: app
  name: devops-app
rules:
- apiGroups: [""]
  resources: ["pods", "pods/exec", "secrets"]
  verbs: ["get", "list", "watch", "create", "patch", "delete"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: devops-app-binding
  namespace: app
subjects:
- kind: User
  name: app-dev-deployer@mycompany-core-security.iam.gserviceaccount.com
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: devops-app
  apiGroup: rbac.authorization.k8s.io



Create the RBAC resources:



gcloud config set project mycompany-business-dev
gcloud container clusters get-credentials business
kubectl create namespace app
kubectl apply -f rbac-dev.yaml



And don't forget to assign permissions to create Kubernetes resources:



gcloud projects add-iam-policy-binding mycompany-business-dev \
  --role roles/container.developer \
  --member "serviceAccount:app-dev-deployer@mycompany-core-security.iam.gserviceaccount.com"



Let's create a new pod in the business cluster:



manifests:
  stage: dev
  image: 
    name: google/cloud-sdk
  script: 
    - gcloud config set project mycompany-business-dev
    - gcloud container clusters get-credentials business
    - kubectl run nginx --image=nginx -n app
  tags:
    - k8s-dev-runner



If you try to create the nginx pod in the default namespace, it will fail with an unauthorized access error.
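
The error looks roughly like this (illustrative output; the exact wording may differ between Kubernetes versions):

$ kubectl run nginx --image=nginx
Error from server (Forbidden): pods is forbidden: User "app-dev-deployer@mycompany-core-security.iam.gserviceaccount.com" cannot create resource "pods" in API group "" in the namespace "default"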


Conclusion

In this post, we created a devops cluster, centralized our GSAs in a dedicated GCP project, and ended up deploying our GCP and Kubernetes resources in a business project.

This mechanism keeps your GSA credentials inside the cloud infrastructure from end to end. You can even create a cron job that disables the GSA in the evening and re-enables it in the morning of each working day.
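
gcloud can disable and re-enable service accounts directly, so such a job only needs two commands (a minimal sketch; the scheduling itself, e.g. a Kubernetes CronJob or Cloud Scheduler, is left out):

# Evening: block any use of the GSA outside working hours
gcloud iam service-accounts disable \
  app-dev-deployer@mycompany-core-security.iam.gserviceaccount.com

# Morning: re-enable it for the working day
gcloud iam service-accounts enable \
  app-dev-deployer@mycompany-core-security.iam.gserviceaccount.com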

If you have any questions or feedback, please feel free to leave a comment.

Otherwise, I hope I've convinced you to remove your GSA keys from Gitlab CI variables and use specific runners in a GKE cluster with Workload Identity enabled.

By the way, do not hesitate to share with peers 😊

Thanks for reading!

Documentation

[1] https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity#authenticating_to
[2] https://gitlab.com/gitlab-org/charts/gitlab-runner/-/blob/main/values.yaml

Top comments (2)

Tim Schwalbe

Hey! Great article!

I just implemented it the same way.

I have a few questions regarding "gcloud container clusters get-credentials business":
How long are these credentials valid?
Could they be stolen and used for a long period, or are they short-lived tokens since GCP knows the call comes from a Cloud Identity account?

Is this the only way to auth kubectl?

Thanks a lot!

Chabane R.

Hi Tim!

Thanks for your contribution!

The credentials live only as long as the Gitlab runner job is up, so they expire right after the stage completes.

For a Kubernetes cluster shared between different teams or departments, I would recommend using Kubernetes RBAC or Kubernetes Agents (Premium tiers). That helps respect the principle of least privilege.