victoria-metrics-distributed

Type: application Version: 0.3.1

A Helm chart for Running VMCluster on Multiple Availability Zones

Prerequisites

  • Install the following packages: git, kubectl, helm, helm-docs. See this tutorial.

  • PV support on underlying infrastructure.

  • Multiple availability zones.

Chart Details

This chart sets up multiple VictoriaMetrics cluster instances across multiple availability zones and provides both global write and global read entrypoints.

The default setup is as shown below:

victoriametrics-distributed-topology

For write:

  1. extra-vmagent (optional): scrapes external targets and all the components installed by this chart, and sends the data to the global write entrypoint.
  2. vmauth-global-write: the global write entrypoint; proxies requests to one of the zone vmagents with the least_loaded policy.
  3. vmagent (per-zone): remote-writes data to every availability zone that has .Values.availabilityZones.allowIngest enabled, and buffers data on disk when a zone is unavailable for ingestion.
  4. vmauth-write-balancer (per-zone): proxies requests to the vminsert instances inside its zone with the least_loaded policy.
  5. vmcluster (per-zone): processes write requests and stores data.

For read:

  1. vmcluster (per-zone): processes query requests and returns results.
  2. vmauth-read-balancer (per-zone): proxies requests to the vmselect instances inside its zone with the least_loaded policy.
  3. vmauth-read-proxy (per-zone): uses all vmauth-read-balancer instances as backends if the zone has .Values.availabilityZones.allowQuery enabled, always preferring the "local" vmauth-read-balancer (first_available policy) to reduce cross-zone traffic.
  4. vmauth-global-read: the global query entrypoint; proxies requests to one of the zone vmauth-read-proxy instances with the first_available policy.
  5. grafana (optional): uses vmauth-global-read as the default datasource.
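The topology above maps directly onto the chart's values. Below is a minimal sketch of a two-zone values.yaml, using the field names and default zone names documented in the Parameters section; treat it as an illustration rather than a complete configuration:

```yaml
availabilityZones:
  - name: zone-eu-1
    allowIngest: true     # zone receives remote writes from the per-zone vmagents
    allowQuery: true      # zone serves the global read entrypoint
    nodeSelector:
      topology.kubernetes.io/zone: zone-eu-1
  - name: zone-us-1
    allowIngest: true
    allowQuery: true
    nodeSelector:
      topology.kubernetes.io/zone: zone-us-1
vmauthIngestGlobal:
  enabled: true           # global write entrypoint (vmauth-global-write)
vmauthQueryGlobal:
  enabled: true           # global read entrypoint (vmauth-global-read)
```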

Note: as shown in the topology above, this chart doesn't include components like vmalert or alertmanager by default. You can install them via the victoria-metrics-k8s-stack dependency or as a separate release.

Why use victoria-metrics-distributed chart?

One of the best practices for running a production Kubernetes cluster is to run it across multiple availability zones. Beyond the Kubernetes control plane components, application pods should also be spread across multiple zones, so the system keeps serving even if a zone outage happens.

VictoriaMetrics supports data replication natively, which guarantees data availability when some of the vmstorage instances fail. But it doesn't work well when vmstorage instances are spread across multiple availability zones, since all replicas of a piece of data could land in a single zone and be lost during a zone outage. To avoid this, a vmcluster must be installed in each availability zone, each holding a 100% copy of the data. As long as one zone is available, both the global write and read entrypoints keep working without interruption.

How to write data?

The chart provides vmauth-global-write as the global write entrypoint. It supports the same push-based data ingestion protocols as VictoriaMetrics. Optionally, you can push data to any of the per-zone vmagents, and they will replicate the received data across zones.
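For example, any Prometheus-compatible agent can remote-write to this entrypoint. A sketch, assuming the chart is installed with release name vmd and the default nameOverride vm-distributed (the service name follows the vmauth-vmauth-global-write-$ReleaseName-vm-distributed pattern used elsewhere in this document):

```yaml
# prometheus.yml fragment (sketch; adjust the service name to your release)
remote_write:
  - url: http://vmauth-vmauth-global-write-vmd-vm-distributed:8427/prometheus/api/v1/write
```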

How to query data?

The chart provides vmauth-global-read as the global read entrypoint. It picks the first available zone (see the first_available policy) as its preferred datasource and automatically switches to the next zone if the first one is unavailable; check vmauth first_available for more details. If you have services like vmalert or Grafana deployed in each zone, configure them to use the local vmauth-read-proxy. The per-zone vmauth-read-proxy always prefers the "local" vmcluster for querying, which reduces cross-zone traffic.

You can also use other proxies, such as a Kubernetes Service with Topology Aware Routing, as the global read entrypoint.
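A sketch of such a Service, with Topology Aware Routing enabled via the standard Kubernetes annotation; the name and selector labels here are assumptions you would adapt to your deployment:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: vm-global-read                  # hypothetical name
  annotations:
    service.kubernetes.io/topology-mode: Auto   # enable Topology Aware Routing
spec:
  selector:
    app.kubernetes.io/name: vmauth      # adjust to match your per-zone read proxies
  ports:
    - port: 8427
      targetPort: 8427
```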

What happens if a zone outage happens?

If availability zone zone-eu-1 is experiencing an outage, vmauth-global-write and vmauth-global-read will work without interruption:

  1. vmauth-global-write stops proxying write requests to zone-eu-1 automatically;
  2. vmauth-global-read and vmauth-read-proxy stop proxying read requests to zone-eu-1 automatically;
  3. vmagent in zone-us-1 fails to send data to zone-eu-1's vmauth-write-balancer and starts buffering data on disk (unless -remoteWrite.disableOnDiskQueue is specified, which is not recommended for this topology). To keep data complete for all availability zones, make sure vmagent has enough disk space for the buffer; see this doc for size recommendations.

To avoid incomplete responses from zone-eu-1 after it recovers from the outage, check the vmagent in zone-us-1 to see whether its persistent queue has been drained. If not, remove zone-eu-1 from serving queries by setting .Values.availabilityZones.{zone-eu-1}.allowQuery=false, and change it back after confirming all data has been restored.
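Excluding the recovering zone from queries is a values.yaml override along these lines (a sketch using the chart's default zone names), applied with helm upgrade:

```yaml
availabilityZones:
  - name: zone-eu-1
    allowQuery: false   # keep this zone out of the global read path until its data is restored
  - name: zone-us-1
    allowQuery: true
```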

How to use multitenancy?

By default, all data written to vmauth-global-write belongs to tenant 0. To write data to different tenants, set .Values.enableMultitenancy=true and create new tenant users for vmauth-global-write. For example, to write data to tenant 1088, follow these steps:

  1. Create a tenant VMUser for the vmauth vmauth-global-write to use:
apiVersion: operator.victoriametrics.com/v1beta1
kind: VMUser
metadata:
  name: tenant-1088-rw
  labels:
    tenant-test: "true"
spec:
  targetRefs:
  - static:
      ## list all the zone vmagent urls here
      urls:
      - "http://vmagent-vmagent-zone-eu-1:8429"
      - "http://vmagent-vmagent-zone-us-1:8429"
    paths:
    - "/api/v1/write"
    - "/prometheus/api/v1/write"
    - "/write"
    - "/api/v1/import"
    - "/api/v1/import/.+"
    target_path_suffix: /insert/1088/
  username: tenant-1088
  password: secret

  2. Add an extra VMUser selector to the vmauth vmauth-global-write:

spec:
  userSelector:
    matchLabels:
      tenant-test: "true"
  3. Send data to vmauth-global-write using the user created above. Example command using vmagent:
/path/to/vmagent -remoteWrite.url=http://vmauth-vmauth-global-write-$ReleaseName-vm-distributed:8427/prometheus/api/v1/write -remoteWrite.basicAuth.username=tenant-1088 -remoteWrite.basicAuth.password=secret
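For completeness, querying the tenant's data back requires a matching read user. Below is a hypothetical VMUser for tenant 1088; the object name and the vmcluster reference are assumptions, mirroring the write example, and the target is vmselect with the /select/1088/prometheus path prefix:

```yaml
apiVersion: operator.victoriametrics.com/v1beta1
kind: VMUser
metadata:
  name: tenant-1088-read          # hypothetical name
  labels:
    tenant-test: "true"           # picked up by the same userSelector
spec:
  targetRefs:
  - crd:
      kind: VMCluster/vmselect
      name: vmcluster-zone-eu-1   # hypothetical vmcluster name; adjust per zone
      namespace: default
    target_path_suffix: "/select/1088/prometheus"
  username: tenant-1088-read
  password: secret
```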

How to install

Access a Kubernetes cluster.

Setup chart repository (can be omitted for OCI repositories)

Add the chart helm repository with the following commands:

helm repo add vm https://victoriametrics.github.io/helm-charts/

helm repo update

List the versions of the vm/victoria-metrics-distributed chart available for installation:

helm search repo vm/victoria-metrics-distributed -l

Install victoria-metrics-distributed chart

Export default values of victoria-metrics-distributed chart to file values.yaml:

  • For HTTPS repository

    helm show values vm/victoria-metrics-distributed > values.yaml
    
  • For OCI repository

    helm show values oci://ghcr.io/victoriametrics/helm-charts/victoria-metrics-distributed > values.yaml
    

Change the values in the values.yaml file according to the needs of your environment.

Test the installation with command:

  • For HTTPS repository

    helm install vmd vm/victoria-metrics-distributed -f values.yaml -n NAMESPACE --debug --dry-run
    
  • For OCI repository

    helm install vmd oci://ghcr.io/victoriametrics/helm-charts/victoria-metrics-distributed -f values.yaml -n NAMESPACE --debug --dry-run
    

Install chart with command:

  • For HTTPS repository

    helm install vmd vm/victoria-metrics-distributed -f values.yaml -n NAMESPACE
    
  • For OCI repository

    helm install vmd oci://ghcr.io/victoriametrics/helm-charts/victoria-metrics-distributed -f values.yaml -n NAMESPACE
    

Get the list of pods by running this command:

kubectl get pods -A | grep 'vmd'

Get the application by running this command:

helm list -f vmd -n NAMESPACE

See the history of versions of the vmd application with the command:

helm history vmd -n NAMESPACE

How to upgrade

In order to keep serving queries and ingestion while upgrading component versions or changing configurations, it's recommended to perform maintenance on one availability zone at a time. First, perform the update on availability zone zone-eu-1:

  1. remove zone-eu-1 from serving queries by setting .Values.availabilityZones.{zone-eu-1}.allowQuery=false;
  2. run helm upgrade vm-dis -n NAMESPACE with the updated configuration for zone-eu-1 in values.yaml;
  3. wait until all the components in zone zone-eu-1 are running;
  4. wait until the zone-us-1 vmagent persistent queue for zone-eu-1 has been drained, then add zone-eu-1 back to serving queries by setting .Values.availabilityZones.{zone-eu-1}.allowQuery=true.

Then, perform the update on availability zone zone-us-1 with the same steps 1-4.

How to uninstall

Remove the application with the command:

helm uninstall vmd -n NAMESPACE

Documentation of Helm Chart

Install helm-docs following the instructions in this tutorial.

Generate docs with the helm-docs command:

cd charts/victoria-metrics-distributed

helm-docs

The markdown generation is entirely go template driven. The tool parses metadata from charts and generates a number of sub-templates that can be referenced in a template file (by default README.md.gotmpl). If no template file is provided, the tool has a default internal template that will generate a reasonably formatted README.

Parameters

The following table lists the configurable parameters of the chart and their default values.

Change the values in the victoria-metrics-distributed/values.yaml file according to the needs of your environment.

Key Type Default Description
availabilityZones[0].allowIngest bool
true

Allow data ingestion to this zone

availabilityZones[0].allowQuery bool
true

Allow data query from this zone through global query endpoint

availabilityZones[0].extraAffinity object
{}

Extra affinity adds user defined custom affinity rules

availabilityZones[0].name string
zone-eu-1

Availability zone name

availabilityZones[0].nodeSelector object
topology.kubernetes.io/zone: zone-eu-1

Node selector to restrict where pods of this zone can be placed. Usually provided by cloud providers.

availabilityZones[0].topologySpreadConstraints list
- maxSkew: 1
  topologyKey: kubernetes.io/hostname
  whenUnsatisfiable: ScheduleAnyway

Topology spread constraints allow customizing the default topologySpreadConstraints.

availabilityZones[0].vmagent object
annotations: {}
enabled: true
name: ""
spec: {}

VMAgent here is only meant to proxy write requests to each AZ; it doesn't support other customized remote write addresses.

availabilityZones[0].vmagent.annotations object
{}

VMAgent annotations

availabilityZones[0].vmagent.name string
""

Override the name of the vmagent object

availabilityZones[0].vmagent.spec object
{}

VMAgent spec. More options can be found here

availabilityZones[0].vmauthCrossAZQuery.enabled bool
true

Create a vmauth with all zones that have allowQuery: true as query backends

availabilityZones[0].vmauthCrossAZQuery.name string
""

Override the name of the vmauth object

availabilityZones[0].vmauthCrossAZQuery.spec object
{}

VMAuth spec. More options can be found here

availabilityZones[0].vmauthIngest.enabled bool
true

Create vmauth as a local write endpoint

availabilityZones[0].vmauthIngest.name string
""

Override the name of the vmauth object

availabilityZones[0].vmauthIngest.spec object
extraArgs:
    discoverBackendIPs: "true"

VMAuth spec. More options can be found here

availabilityZones[0].vmauthQueryPerZone.enabled bool
true

Create vmauth as a local read endpoint

availabilityZones[0].vmauthQueryPerZone.name string
""

Override the name of the vmauth object

availabilityZones[0].vmauthQueryPerZone.spec object
extraArgs:
    discoverBackendIPs: "true"

VMAuth spec. More options can be found here

availabilityZones[0].vmcluster.enabled bool
true

availabilityZones[0].vmcluster.name string
""

Override the name of the vmcluster; the default is vmcluster-

availabilityZones[0].vmcluster.spec object
replicationFactor: 2
retentionPeriod: "14"
vminsert:
    extraArgs: {}
    replicaCount: 2
    resources: {}
vmselect:
    extraArgs: {}
    replicaCount: 2
    resources: {}
vmstorage:
    replicaCount: 2
    resources: {}
    storageDataPath: /vm-data

Spec for VMCluster CRD, see here

availabilityZones[1].allowIngest bool
true

Allow data ingestion to this zone

availabilityZones[1].allowQuery bool
true

Allow data query from this zone through global query endpoint

availabilityZones[1].extraAffinity object
{}

Extra affinity adds user defined custom affinity rules

availabilityZones[1].name string
zone-us-1

Availability zone name

availabilityZones[1].nodeSelector object
topology.kubernetes.io/zone: zone-us-1

Node selector to restrict where pods of this zone can be placed. Usually provided by cloud providers.

availabilityZones[1].topologySpreadConstraints list
- maxSkew: 1
  topologyKey: kubernetes.io/hostname
  whenUnsatisfiable: ScheduleAnyway

Topology spread constraints allow customizing the default topologySpreadConstraints.

availabilityZones[1].vmagent object
annotations: {}
enabled: true
name: ""
spec: {}

VMAgent here is only meant to proxy write requests to each AZ; it doesn't support customized remote write addresses.

availabilityZones[1].vmagent.annotations object
{}

VMAgent annotations

availabilityZones[1].vmagent.name string
""

Override the name of the vmagent object

availabilityZones[1].vmagent.spec object
{}

VMAgent spec. More options can be found here

availabilityZones[1].vmauthCrossAZQuery.enabled bool
true

Create a vmauth with all zones that have allowQuery: true as query backends

availabilityZones[1].vmauthCrossAZQuery.name string
""

Override the name of the vmauth object

availabilityZones[1].vmauthCrossAZQuery.spec object
{}

VMAuth spec. More options can be found here

availabilityZones[1].vmauthIngest.enabled bool
true

Create vmauth as a local write endpoint

availabilityZones[1].vmauthIngest.name string
""

Override the name of the vmauth object

availabilityZones[1].vmauthIngest.spec object
extraArgs:
    discoverBackendIPs: "true"

VMAuth spec. More options can be found here

availabilityZones[1].vmauthQueryPerZone.enabled bool
true

Create vmauth as a local read endpoint

availabilityZones[1].vmauthQueryPerZone.name string
""

Override the name of the vmauth object

availabilityZones[1].vmauthQueryPerZone.spec object
extraArgs:
    discoverBackendIPs: "true"

VMAuth spec. More options can be found here

availabilityZones[1].vmcluster.enabled bool
true

availabilityZones[1].vmcluster.name string
""

Override the name of the vmcluster; the default is vmcluster-

availabilityZones[1].vmcluster.spec object
replicationFactor: 2
retentionPeriod: "14"
vminsert:
    extraArgs: {}
    replicaCount: 2
    resources: {}
vmselect:
    extraArgs: {}
    replicaCount: 2
    resources: {}
vmstorage:
    replicaCount: 2
    resources: {}
    storageDataPath: /vm-data

Spec for VMCluster CRD, see here

enableMultitenancy bool
false

Enable multitenancy mode, see here

extraVMAgent object
enabled: true
name: test-vmagent
spec:
    selectAllByDefault: true

Set up an extra vmagent to scrape all the scrape objects by default, and write data to the above vmauth-global-ingest endpoint.

fullnameOverride string
""

Overrides the chart's computed fullname.

nameOverride string
vm-distributed

Overrides the chart's name

victoria-metrics-k8s-stack object
alertmanager:
    enabled: false
crds:
    enabled: true
enabled: true
grafana:
    enabled: true
    sidecar:
        datasources:
            enabled: true
victoria-metrics-operator:
    enabled: true
vmagent:
    enabled: false
vmalert:
    enabled: false
vmcluster:
    enabled: false
vmsingle:
    enabled: false

Set up the VM operator and other resources like vmalert and Grafana if needed

vmauthIngestGlobal.enabled bool
true

Create a vmauth as the global write entrypoint

vmauthIngestGlobal.name string
""

Override the name of the vmauth object

vmauthIngestGlobal.spec object
{}

VMAuth spec. More options can be found here

vmauthQueryGlobal.enabled bool
true

Create vmauth as the global read entrypoint

vmauthQueryGlobal.name string
""

Override the name of the vmauth object

vmauthQueryGlobal.spec object
{}

VMAuth spec. More options can be found here