End-to-end Testing

What is end-to-end testing?

End-to-end testing is a strategy used to check whether your application works as expected across the entire software stack and architecture, including the integration of all microservices and components that are supposed to work together.

Branch naming

If your contribution contains only changes under the qa/ folder, you can speed up the CI process by following some branch naming conventions. You have three choices:

Branch name       | Valid example
----------------- | ----------------------------
Starting with qa/ | qa/new-oauth-login-test
Starting with qa- | qa-new-oauth-login-test
Ending in -qa     | 123-new-oauth-login-test-qa

If your branch name matches any of the above patterns, the pipeline runs only the QA-related jobs. If it does not, the whole application test suite runs (including the QA-related jobs).
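
As an illustration, the three conventions can be expressed as a single pattern. The sketch below is not the actual CI configuration, just a minimal Python check that mirrors the table above:

```python
import re

# Illustrative only: the real check lives in the project's CI configuration.
# A branch gets the QA-only pipeline when it starts with "qa/" or "qa-",
# or ends in "-qa".
QA_BRANCH_PATTERN = re.compile(r"^(qa[/-].*|.*-qa)$")

def runs_qa_jobs_only(branch_name: str) -> bool:
    """Return True if the branch name matches one of the QA naming conventions."""
    return bool(QA_BRANCH_PATTERN.match(branch_name))

assert runs_qa_jobs_only("qa/new-oauth-login-test")
assert runs_qa_jobs_only("qa-new-oauth-login-test")
assert runs_qa_jobs_only("123-new-oauth-login-test-qa")
assert not runs_qa_jobs_only("new-oauth-login-test")
```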

How do we test GitLab?

We use Omnibus GitLab to build GitLab packages and then we test these packages using the GitLab QA orchestrator tool, which is a black-box testing framework for the API and the UI.
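
"Black box" here means the tests exercise the running instance only through its public interfaces, with no access to its internals. A toy sketch of that idea, assuming a GitLab instance is reachable at a placeholder URL:

```python
import requests

# Toy illustration of black-box testing: talk to the instance only through
# its public HTTP surfaces. The URL is a placeholder for the instance under test.
GITLAB_URL = "http://gitlab.test"

# API surface: listing public projects should succeed without authentication.
api_response = requests.get(f"{GITLAB_URL}/api/v4/projects", params={"per_page": 1})
assert api_response.status_code == 200

# UI surface: the sign-in page should render.
ui_response = requests.get(f"{GITLAB_URL}/users/sign_in")
assert ui_response.status_code == 200
```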

Testing nightly builds

We run a scheduled pipeline each night to test nightly builds created by Omnibus. You can find these nightly pipelines at gitlab-org/quality/nightly/pipelines. Results are reported in the #qa-nightly Slack channel.

Testing staging

We run a scheduled pipeline each night to test staging. You can find these nightly pipelines at gitlab-org/quality/staging/pipelines. Results are reported in the #qa-staging Slack channel.

Testing code in merge requests

Using the package-and-qa job

You can run end-to-end tests for a merge request by triggering the package-and-qa manual action in the test stage (not available for forks); the tests ultimately run in a pipeline in the gitlab-qa project.

This runs end-to-end tests against a custom Omnibus package built from your merge request's changes.
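
If you prefer scripting over clicking the action in the UI, the same manual job can be played through the GitLab jobs API. A minimal sketch, where the project path, pipeline ID, and token are placeholders for your own values:

```python
import requests

# Illustrative sketch: plays the package-and-qa manual action through the
# GitLab jobs API instead of the merge request UI. All values below are
# placeholders.
GITLAB_API = "https://gitlab.com/api/v4"
PROJECT_ID = "gitlab-org%2Fgitlab-ce"  # URL-encoded project path
PIPELINE_ID = 12345678                 # the merge request pipeline containing the job
TOKEN = "<your-personal-access-token>"

headers = {"PRIVATE-TOKEN": TOKEN}

# Find the manual package-and-qa job in the pipeline.
jobs = requests.get(
    f"{GITLAB_API}/projects/{PROJECT_ID}/pipelines/{PIPELINE_ID}/jobs",
    headers=headers,
    params={"scope[]": "manual"},
).json()
job = next(j for j in jobs if j["name"] == "package-and-qa")

# Start it, exactly as clicking "play" in the UI would.
requests.post(f"{GITLAB_API}/projects/{PROJECT_ID}/jobs/{job['id']}/play", headers=headers)
```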

A manual action that starts end-to-end tests is also available in merge requests in Omnibus GitLab.

Below you can read more about how to use it and how it works.

How does it work?

Currently, we use a multi-project pipeline-like approach to run QA pipelines.

```mermaid
graph LR
    A1 -.->|1. Triggers an omnibus-gitlab pipeline and waits for it to be done| A2
    B2[`Trigger-qa` stage<br>`Trigger:qa-test` job] -.->|2. Triggers a gitlab-qa pipeline and waits for it to be done| A3

    subgraph "gitlab-ce/ee pipeline"
    A1[`test` stage<br>`package-and-qa` job]
    end

    subgraph "omnibus-gitlab pipeline"
    A2[`Trigger-docker` stage<br>`Trigger:gitlab-docker` job] -->|once done| B2
    end

    subgraph "gitlab-qa pipeline"
    A3>QA jobs run] -.->|3. Reports back the pipeline result to the `package-and-qa` job<br>and posts the result on the original commit tested| A1
    end
```

1. A developer triggers a manual action that can be found in CE / EE merge requests. This starts a chain of pipelines in multiple projects.

2. The script being executed triggers a pipeline in Omnibus GitLab and waits for the resulting status. We call this a status attribution. (A minimal sketch of this trigger-and-poll pattern follows this list.)

3. GitLab packages are built in the Omnibus GitLab pipeline. Packages are then pushed to its Container Registry.

4. When packages are ready and available in the registry, a final step in the Omnibus GitLab pipeline triggers a new GitLab QA pipeline (those with access can view them at https://gitlab.com/gitlab-org/gitlab-qa/pipelines). It also waits for the resulting status.

5. GitLab QA pulls images from the registry, spins up containers, and runs tests against a test environment that has just been orchestrated by the gitlab-qa tool.

6. The result of the GitLab QA pipeline is propagated upstream, through Omnibus GitLab, back to the CE / EE merge request.
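
Steps 2 and 4 rely on the same trigger-and-poll pattern: start a pipeline in the downstream project, then poll it until it reaches a finished status. The sketch below illustrates that pattern with the pipeline triggers API; the project, ref, and tokens are placeholders, and the real CI scripts also handle variables, retries, and commit statuses:

```python
import time

import requests

GITLAB_API = "https://gitlab.com/api/v4"


def trigger_and_wait(project_id: str, ref: str, trigger_token: str, api_token: str) -> str:
    """Trigger a downstream pipeline and poll until it finishes.

    Illustrative sketch of the trigger-and-poll pattern described above,
    not the actual scripts used by the package-and-qa job.
    """
    # Start the downstream pipeline via the pipeline triggers API.
    pipeline = requests.post(
        f"{GITLAB_API}/projects/{project_id}/trigger/pipeline",
        data={"token": trigger_token, "ref": ref},
    ).json()

    # Poll until the pipeline reaches a finished state, then report it back.
    finished = {"success", "failed", "canceled", "skipped"}
    while True:
        status = requests.get(
            f"{GITLAB_API}/projects/{project_id}/pipelines/{pipeline['id']}",
            headers={"PRIVATE-TOKEN": api_token},
        ).json()["status"]
        if status in finished:
            return status
        time.sleep(30)
```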

Using the review-qa-all jobs

On every pipeline during the test stage, the review-qa-smoke job is automatically started: it runs the QA smoke suite against the Review App.

You can also manually start the review-qa-all job: it runs the full QA suite against the Review App.

This runs end-to-end tests against a Review App based on the official GitLab Helm chart, itself deployed with custom Cloud Native components built from your merge request's changes.

See Review Apps for more details about Review Apps.

How do I run the tests?

There are two main options for running the tests. If you simply want to run the existing tests against a live GitLab instance or against a pre-built Docker image, you can use the GitLab QA orchestrator. See also examples of the test scenarios you can run via the orchestrator.
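
For example, assuming the gitlab-qa gem is installed, an orchestrator run against a pre-built image looks roughly like the following (wrapped in Python here only to keep the examples in one language; check the orchestrator README for the scenarios and arguments it currently supports):

```python
import subprocess

# Illustrative only: run the GitLab QA orchestrator against a pre-built
# Docker image of GitLab EE. Scenario name and arguments may differ; see
# the orchestrator README for the supported scenarios.
subprocess.run(["gitlab-qa", "Test::Instance::Image", "EE"], check=True)
```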

On the other hand, if you would like to run the tests against a local development GitLab environment, you can use the GitLab Development Kit (GDK). Please refer to the instructions in the QA README and the section below.

How do I write tests?

To write new tests, you first need to learn more about the GitLab QA architecture. See the documentation about it.

Once you have decided where to put test environment orchestration scenarios and instance-level scenarios, take a look at the GitLab QA README, the GitLab QA orchestrator README, and the existing instance-level scenarios.

Where can I ask for help?

You can ask questions in the #quality channel on Slack (GitLab internal), or you can find an issue you would like to work on in the gitlab-ce issue tracker, the gitlab-ee issue tracker, or the gitlab-qa issue tracker.