Monitor CI/CD Environment metrics

Once configured, GitLab will attempt to retrieve performance metrics for any environment which has had a successful deployment.

GitLab will automatically scan the Prometheus server for metrics from known servers like Kubernetes and NGINX, and attempt to identify individual environments. The supported metrics and scan process is detailed in our Prometheus Metrics Library documentation.

You can view the performance dashboard for an environment by clicking on the monitoring button.

Metrics dashboard visibility

Introduced in GitLab 13.0.

You can set the visibility of the metrics dashboard to Only Project Members or Everyone With Access. When set to Everyone With Access, the metrics dashboard is visible to both authenticated and non-authenticated users.

Adding custom metrics


Custom metrics can be monitored by adding them on the monitoring dashboard page. Once saved, they are displayed on the environment performance dashboard, provided that either:

  • Prometheus is manually configured, or
  • a connected Kubernetes cluster with a matching environment scope is used, with Prometheus installed on the cluster.

Add New Metric

A few fields are required:

  • Name: Chart title
  • Type: Type of metric. Metrics of the same type will be shown together.
  • Query: Valid PromQL query.
  • Y-axis label: Y axis title to display on the dashboard.
  • Unit label: Query units, for example req / sec. Shown next to the value.

Multiple metrics can be displayed on the same chart if the Name, Type, and Y-axis label fields match between metrics. For example, a metric with Name Requests Rate, Type Business, and Y-axis label req / sec would display on the same chart as a second metric with the same values. A Legend label is suggested if this feature is used.
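The same fields also appear in GitLab's custom dashboard YAML (stored under .gitlab/dashboards/ in the project repository). The following sketch shows two metrics sharing one chart; the dashboard name, metric ids, and PromQL queries are illustrative examples, not values from this page:

    dashboard: 'Business metrics'
    panel_groups:
      - group: Business
        panels:
          - title: "Requests Rate"
            y_label: "req / sec"
            metrics:
              # Two metrics under the same panel render on the same chart.
              - id: requests_success_rate
                query_range: 'rate(http_requests_total{status=~"2.."}[5m])'
                unit: req / sec
                label: 'Successful'
              - id: requests_error_rate
                query_range: 'rate(http_requests_total{status=~"5.."}[5m])'
                unit: req / sec
                label: 'Errors'

The label field plays the role of the Legend label suggested above, distinguishing the two series on the shared chart.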

Editing additional metrics from the dashboard

Introduced in GitLab 12.9.

You can edit existing additional custom metrics by clicking the More actions dropdown and selecting Edit metric.

Edit metric

Setting up alerts for Prometheus metrics

Managed Prometheus instances

For managed Prometheus instances using auto configuration, alerts for metrics can be configured directly in the performance dashboard.

To set an alert:

  1. Click on the ellipsis icon in the top right corner of the metric you want to create the alert for.
  2. Choose Alerts.
  3. Set threshold and operator.
  4. Click Add to save and activate the alert.

Adding an alert

To remove the alert, click back on the alert icon for the desired metric, and click Delete.

External Prometheus instances


For manually configured Prometheus servers, a notify endpoint is provided to use with Prometheus webhooks. If you have manual configuration enabled, an Alerts section is added to Settings > Integrations > Prometheus. This contains the URL and Authorization Key. The Reset Key button will invalidate the key and generate a new one.

Prometheus service configuration of Alerts

To send GitLab alert notifications, copy the URL and Authorization Key into the webhook_configs section of your Prometheus Alertmanager configuration:

receivers:
  - name: gitlab
    webhook_configs:
      - http_config:
          bearer_token: 9e1cbfcd546896a9ea8be557caf13a76
        send_resolved: true
        url: http://192.168.178.31:3001/root/manual_prometheus/prometheus/alerts/notify.json
  ...

In order for GitLab to associate your alerts with an environment, you need to configure a gitlab_environment_name label on the alerts you set up in Prometheus. The value of this should match the name of your Environment in GitLab.
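As a sketch, the label can be set on each alerting rule in your Prometheus rules file. The rule name, expression, and environment name below are illustrative placeholders, not values from this page:

    groups:
      - name: gitlab-environments
        rules:
          - alert: HighRequestLatency
            expr: job:request_latency_seconds:mean5m > 0.5
            for: 5m
            labels:
              # Must match the environment name in GitLab exactly.
              gitlab_environment_name: production
            annotations:
              title: "High request latency"

Labels set here are forwarded by Alertmanager in the webhook payload, which is how GitLab matches the alert to an environment.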

Note In GitLab versions 13.1 and greater, you can configure your manually configured Prometheus server to use the Generic alerts integration.

Taking action on incidents


Alerts can be used to trigger actions, like opening an issue automatically (disabled by default since 13.1). To configure the actions:

  1. Navigate to your project’s Settings > Operations > Incidents.
  2. Enable the option to create issues.
  3. Choose the issue template to create the issue from.
  4. Optionally, select whether to send an email notification to the developers of the project.
  5. Click Save changes.

Once enabled, an issue is opened automatically when an alert is triggered. The issue contains values extracted from the alert's payload:

  • Issue author: GitLab Alert Bot
  • Issue title: Extracted from annotations/title, annotations/summary, or labels/alertname
  • Alert Summary: A list of properties
    • starts_at: Alert start time via startsAt
    • full_query: Alert query extracted from generatorURL
    • Optional list of attached annotations extracted from annotations/*
  • Alert GFM: GitLab Flavored Markdown from annotations/gitlab_incident_markdown
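The field mapping above can be sketched in a few lines of Python. The sample payload is a made-up Alertmanager webhook alert, not a real one; only the precedence rules and the generatorURL extraction come from the list above:

```python
from urllib.parse import urlparse, parse_qs

# A minimal, hypothetical Alertmanager webhook alert for illustration.
payload = {
    "startsAt": "2020-04-27T10:10:22Z",
    "generatorURL": "http://prometheus:9090/graph?g0.expr=rate%28http_requests_total%5B5m%5D%29",
    "labels": {"alertname": "HighRequestRate"},
    "annotations": {"title": "Request rate too high"},
}

def issue_title(alert):
    """Title precedence: annotations/title, then annotations/summary, then labels/alertname."""
    ann = alert.get("annotations", {})
    return ann.get("title") or ann.get("summary") or alert["labels"]["alertname"]

def full_query(alert):
    """Recover the PromQL expression from the g0.expr query parameter of generatorURL."""
    qs = parse_qs(urlparse(alert["generatorURL"]).query)
    return qs["g0.expr"][0]

print(issue_title(payload))  # Request rate too high
print(full_query(payload))   # rate(http_requests_total[5m])
```

This is a sketch of the extraction logic, not GitLab's implementation; it shows why a well-formed generatorURL is needed for the full_query property to be populated.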

When GitLab receives a Recovery Alert, it will automatically close the associated issue. This action will be recorded as a system message on the issue indicating that it was closed automatically by the GitLab Alert bot.

To further customize the issue, you can add labels, mentions, or any other supported quick action in the selected issue template, which will apply to all incidents. To limit quick actions or other information to only specific types of alerts, use the annotations/gitlab_incident_markdown field.

Since version 12.2, GitLab will tag each incident issue with the incident label automatically. If the label does not yet exist, it will be created automatically as well.

If the metric exceeds the threshold of the alert for over 5 minutes, an email will be sent to all Maintainers and Owners of the project.