
How to apply patch to config.toml file for GitLab runner inside Kubernetes cluster?


I have been struggling for quite some time with an apparently very common GitLab issue, where the runner fails to build an image due to

$ docker build --tag $CI_REGISTRY_IMAGE .
ERROR: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

My pipeline is pretty basic (the default one provided by GitLab, with most of the steps removed): a test step that runs some tests using make, and a build step where an image is created. The image is built with Docker-in-Docker (I'm currently looking for alternatives that work with GitLab). The runner was installed and registered following the official GitLab documentation for Kubernetes runners using Helm.
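For context, the build job looks roughly like the snippet below. It is adapted from the default GitLab Docker template, so the exact image and service tags (and the TLS variable) are placeholders on my side rather than a verbatim copy of my pipeline:

# Sketch of the build job, based on the default GitLab Docker template;
# docker:24.0 / docker:24.0-dind tags are illustrative placeholders.
build:
  stage: build
  image: docker:24.0
  services:
    - docker:24.0-dind
  variables:
    # the template sets this so the dind service can generate TLS certs
    DOCKER_TLS_CERTDIR: "/certs"
  script:
    - docker build --tag $CI_REGISTRY_IMAGE .
    - docker push $CI_REGISTRY_IMAGE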

I found the solution in a comment in one of the GitLab issues. I used the Kubernetes Dashboard to log into the pod and change the configuration file, located at /home/gitlab-runner/.gitlab-runner/config.toml as follows:

concurrent = 10
check_interval = 30
log_level = "info"
shutdown_timeout = 0

[session_server]
  session_timeout = 1800

[[runners]]
  name = "k8s-remote-gitlab-runner-5857847fcf-sbtvz"
  url = "https://XXXXXXXXXXXXXXXXXX"
  id = 13208
  token = "YYYYYYYYYYYYYY"
  token_obtained_at = 2023-04-14T07:54:46Z
  token_expires_at = 0001-01-01T00:00:00Z
  executor = "kubernetes"
  [runners.custom_build_dir]
  [runners.cache]
    MaxUploadedArchiveSize = 0
    [runners.cache.s3]
    [runners.cache.gcs]
    [runners.cache.azure]
  [runners.kubernetes]
    host = ""
    bearer_token_overwrite_allowed = false
    image = "ubuntu:16.04"
    namespace = "gitlab-runner"
    namespace_overwrite_allowed = ""
    privileged = true
    node_selector_overwrite_allowed = ""
    pod_labels_overwrite_allowed = ""
    service_account_overwrite_allowed = ""
    pod_annotations_overwrite_allowed = ""
    [runners.kubernetes.affinity]
    [runners.kubernetes.pod_security_context]
    [runners.kubernetes.init_permissions_container_security_context]
      [runners.kubernetes.init_permissions_container_security_context.capabilities]
    [runners.kubernetes.build_container_security_context]
      [runners.kubernetes.build_container_security_context.capabilities]
    [runners.kubernetes.helper_container_security_context]
      [runners.kubernetes.helper_container_security_context.capabilities]
    [runners.kubernetes.service_container_security_context]
      [runners.kubernetes.service_container_security_context.capabilities]
    [runners.kubernetes.volumes]
      [[runners.kubernetes.volumes.host_path]]
        name = "var-run-docker-sock"
        path = "/var/run/docker.sock"
        mount_path = "/var/run/docker.sock"
        read_only = false
    [runners.kubernetes.dns_config]
    [runners.kubernetes.container_lifecycle]

The added part is

[[runners.kubernetes.volumes.host_path]]
  name = "var-run-docker-sock"
  path = "/var/run/docker.sock"
  mount_path = "/var/run/docker.sock"
  read_only = false

I am aware this is just a temporary fix applied to a single pod that might well be deleted within a couple of hours or days. I would like to know whether it's possible to apply a patch to a specific file inside a container in an automated manner. Simply copying a predefined config.toml into the deployed image doesn't work, since that would overwrite important runner parameters such as the token (including when it was obtained and when it expires), the ID, the name, and so on.

Since the configuration is also the default one, perhaps it's better to just change the initial configuration so that every new runner picks up the fixed one instead.
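If I go that route, something like the following Helm values snippet is what I have in mind. It relies on the runners.config value of the gitlab/gitlab-runner chart (available in recent chart versions, as far as I can tell from the chart docs), and the concrete values below are illustrative and untested:

# values.yaml sketch for the gitlab/gitlab-runner Helm chart;
# registration URL/token settings omitted, values are assumptions.
runners:
  config: |
    [[runners]]
      [runners.kubernetes]
        namespace = "{{.Release.Namespace}}"
        image = "ubuntu:16.04"
        privileged = true
        [[runners.kubernetes.volumes.host_path]]
          name = "var-run-docker-sock"
          path = "/var/run/docker.sock"
          mount_path = "/var/run/docker.sock"
          read_only = false

If I understand the chart correctly, applying this with helm upgrade -f values.yaml would make every new runner pod render this template into its config.toml, while the registration-specific fields (token, id, name) are still filled in by the runner itself.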

