From EC2 to EKS: Optimising GitLab Runner with Karpenter Autoscaling

Written by Rachit, justDice's Senior DevOps Engineer

In this post, we’ll guide you through our journey of migrating GitLab runners from static deployments on Amazon EC2 instances to a dynamic and cost-efficient setup on Amazon EKS (Elastic Kubernetes Service), with Karpenter handling node autoscaling. Just picture it: our old setup was like a buffet with everything laid out, even when you only wanted one slice of cake. We were paying for the whole spread, even the leftovers nobody touched!

Old Setup

Our initial setup consisted of a central GitLab server on an EC2 instance. We utilised separate EC2 machines for GitLab runners, categorised by the resources they need:

  1. gitlab-runner-low: 60 x c5n.large – dedicated instances for low-resource jobs
  2. gitlab-runner-mid: 3 x c5.xlarge – dedicated instances for mid-resource jobs
  3. gitlab-runner-high: 6 x c5n.2xlarge – dedicated instances for high-resource jobs

The Problem

These runners ran continuously, regardless of actual workload, leading to unnecessary resource consumption and increased costs. It was like watering a desert with no plants to drink it!

The Solution

To address this inefficiency, we migrated to EKS and adopted an on-demand approach for GitLab runners. This involves spinning up runners only when needed, improving resource utilisation and minimising costs. Think of it as a self-serve ice cream bar – you only pay for what you scoop!


  1. Deploying the EKS cluster: we provisioned a new EKS cluster to host the runner workloads.
  2. Deploying the GitLab Runner Helm chart: with the Kubernetes executor, the runner launches a short-lived pod for each CI job, so compute is consumed only while a job is actually running.
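The two steps above can be sketched roughly as follows. This is a minimal, illustrative example only – the cluster name, region, GitLab URL, and token are placeholders, not our production configuration:

```shell
# 1. Create the EKS cluster (name, region, and sizing are illustrative)
eksctl create cluster \
  --name gitlab-runners \
  --region eu-west-1 \
  --nodes 2

# 2. Install the GitLab Runner Helm chart with the Kubernetes executor
helm repo add gitlab https://charts.gitlab.io
helm repo update
helm install gitlab-runner gitlab/gitlab-runner \
  --namespace gitlab-runner --create-namespace \
  --set gitlabUrl=https://gitlab.example.com \
  --set runnerRegistrationToken=<your-registration-token> \
  --set runners.executor=kubernetes
```

In practice you would pin chart and cluster versions and manage the runner values in a versioned values.yaml rather than `--set` flags, but the shape of the setup is the same.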

Introducing Karpenter

While the above steps addressed the on-demand runner requirement, a new challenge arose: the default Cluster Autoscaler on AWS is comparatively slow to provision new nodes, since it scales by resizing pre-defined Auto Scaling groups. This delay could noticeably lengthen the time it takes for GitLab runner pods to start when resource demands spike.

Fortunately, we found a solution in Karpenter!

Karpenter to the Rescue

Karpenter is an open-source Kubernetes node autoscaler that excels at managing node resources. Rather than scaling pre-defined node groups, it watches for pending pods and provisions right-sized nodes for them directly, then removes those nodes when they are no longer needed. This ensures efficient resource utilisation and cost savings, and makes scaling significantly faster than with the default Cluster Autoscaler.

To learn how to deploy Karpenter, refer to the official documentation (https://karpenter.sh/docs/getting-started/getting-started-with-karpenter/). Notably, eksctl also supports Karpenter deployment: https://eksctl.io/usage/eksctl-karpenter/
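To give a feel for how Karpenter is told what it may launch, here is a minimal NodePool sketch using the v1beta1 API. The instance category, capacity type, and CPU limit are assumptions for illustration, not our actual values:

```yaml
apiVersion: karpenter.sh/v1beta1
kind: NodePool
metadata:
  name: gitlab-runners          # illustrative name
spec:
  template:
    spec:
      requirements:
        # Restrict to compute-optimised ("c") instance families
        - key: karpenter.k8s.aws/instance-category
          operator: In
          values: ["c"]
        - key: karpenter.sh/capacity-type
          operator: In
          values: ["on-demand"]
      nodeClassRef:
        name: default            # references an EC2NodeClass defined separately
  limits:
    cpu: "200"                   # cap total provisioned CPU
  disruption:
    consolidationPolicy: WhenUnderutilized
```

When runner pods are pending, Karpenter picks an instance type that satisfies these requirements and fits the pods' resource requests; when jobs finish, consolidation shrinks the fleet back down.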


By transitioning to EKS and leveraging Karpenter, we successfully achieved an on-demand setup for GitLab runners, optimising resource utilisation and minimising costs. This not only reduces our cloud expenses but also enhances the overall efficiency of our CI/CD pipeline. Who knew managing GitLab runners could be so sweet (and cost-effective)?

This blog serves as a high-level overview of our migration process. If you have further questions or require more specific details, feel free to reach out to Rachit on LinkedIn.