Group-level Kubernetes clusters

Introduced in GitLab 11.6.

Overview

Similar to project-level and instance-level Kubernetes clusters, group-level Kubernetes clusters allow you to connect a Kubernetes cluster to your group, enabling you to use the same cluster across multiple projects.

Installing applications

GitLab can install and manage some applications in your group-level cluster. For more information on installing, upgrading, uninstalling, and troubleshooting applications for your group cluster, see GitLab Managed Apps.

RBAC compatibility

For each project under a group with a Kubernetes cluster, GitLab will create a restricted service account with edit privileges in the project namespace.

NOTE: RBAC support was introduced in GitLab 11.4, and project namespace restriction was introduced in GitLab 11.5.
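
As an illustration, the access GitLab grants is roughly equivalent to the following manifest. This is a sketch only: the namespace and service account names below are hypothetical, since GitLab derives the actual names from your project.

# Sketch of a restricted service account bound to the built-in "edit"
# ClusterRole in the project namespace. Names are illustrative only.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-project-service-account
  namespace: my-project-123-production
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: my-project-edit
  namespace: my-project-123-production
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: edit
subjects:
  - kind: ServiceAccount
    name: my-project-service-account
    namespace: my-project-123-production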

Cluster precedence

If the project's cluster is available and not disabled, GitLab will use it before any cluster belonging to the group containing the project.

In the case of sub-groups, GitLab will use the cluster of the closest ancestor group to the project, provided the cluster is not disabled.

Multiple Kubernetes clusters (PREMIUM)

With GitLab Premium, you can associate more than one Kubernetes cluster with your group. That way you can have different clusters for different environments, such as dev, staging, and production.

Add another cluster similar to the first one and make sure to set an environment scope that will differentiate the new cluster from the rest.

GitLab-managed clusters

Introduced in GitLab 11.5. Became optional in GitLab 11.11.

You can choose to allow GitLab to manage your cluster for you. If your cluster is managed by GitLab, resources for your projects will be automatically created. See the Access controls section for details on which resources will be created.

If you choose to manage your own cluster, project-specific resources will not be created automatically. If you are using Auto DevOps, you will need to explicitly provide the KUBE_NAMESPACE deployment variable that will be used by your deployment jobs.
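
For example, you could provide the variable directly in .gitlab-ci.yml. This is a minimal sketch; the namespace name is only an example and must match a namespace that already exists in your self-managed cluster.

# Sketch: supply the deployment namespace yourself when GitLab does not
# manage the cluster. "my-namespace" is an example value.
variables:
  KUBE_NAMESPACE: my-namespace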

NOTE: If you install applications on your cluster, GitLab will create the resources required to run them even if you have chosen to manage your own cluster.

Base domain

Introduced in GitLab 11.8.

Domains configured at the cluster level allow you to support multiple domains across multiple Kubernetes clusters. When you specify a domain, it is automatically set as an environment variable (KUBE_INGRESS_BASE_DOMAIN) during the Auto DevOps stages.

The domain should have a wildcard DNS configured to the Ingress IP address.
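
As a sketch of how the variable surfaces in a pipeline, the job below simply echoes the value that Auto DevOps receives from the cluster's base domain setting. The job name and stage are illustrative, not part of Auto DevOps itself.

# Sketch only: KUBE_INGRESS_BASE_DOMAIN is populated from the cluster's
# base domain and can be read like any other CI variable.
show base domain:
  stage: test
  script: echo "Ingress base domain is $KUBE_INGRESS_BASE_DOMAIN"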

Environment scopes (PREMIUM)

When adding more than one Kubernetes cluster to your project, you need to differentiate them with an environment scope. The environment scope associates clusters with environments similar to how the environment-specific variables work.

While evaluating which environment matches the environment scope of a cluster, cluster precedence will take effect. The cluster at the project level will take precedence, followed by the closest ancestor group, followed by that group's parent, and so on.

For example, let's say we have the following Kubernetes clusters:

Cluster       Environment scope   Where
Project       *                   Project
Staging       staging/*           Project
Production    production/*        Project
Test          test                Group
Development   *                   Group

And the following environments are set in .gitlab-ci.yml:

stages:
- test
- deploy

test:
  stage: test
  script: sh test

deploy to staging:
  stage: deploy
  script: make deploy
  environment:
    name: staging/$CI_COMMIT_REF_NAME
    url: https://staging.example.com/

deploy to production:
  stage: deploy
  script: make deploy
  environment:
    name: production/$CI_COMMIT_REF_NAME
    url: https://example.com/

The result will then be:

  • The Project cluster will be used for the test job.
  • The Staging cluster will be used for the deploy to staging job.
  • The Production cluster will be used for the deploy to production job.

Security of Runners

For important information about securely configuring GitLab Runners, see Security of Runners documentation for project-level clusters.