Readme cleanup (#2715)

* docs: minor styling and wording changes

Changes made after reading https://developers.google.com/tech-writing

Signed-off-by: hagen1778 <roman@victoriametrics.com>

* docs: set proper types for code blocks

Signed-off-by: hagen1778 <roman@victoriametrics.com>

* docs: add `copy` wrapper for some commands

Signed-off-by: hagen1778 <roman@victoriametrics.com>

* docs: sync

Signed-off-by: hagen1778 <roman@victoriametrics.com>

* docs: resolve conflicts

Signed-off-by: hagen1778 <roman@victoriametrics.com>
Roman Khavronenko 2022-06-13 08:48:26 +02:00 committed by GitHub
parent 879670418f
commit cd2f0e0760
5 changed files with 168 additions and 87 deletions


@@ -352,13 +352,17 @@ echo '
 The imported data can be read via [export API](https://docs.victoriametrics.com/#how-to-export-data-in-json-line-format):
+<div class="with-copy" markdown="1">
 ```bash
 curl http://localhost:8428/api/v1/export -d 'match[]=system.load.1'
 ```
+</div>
 This command should return the following output if everything is OK:
-```
+```json
 {"metric":{"__name__":"system.load.1","environment":"test","host":"test.example.com"},"values":[0.5],"timestamps":[1632833641000]}
 ```
@@ -460,13 +464,17 @@ VictoriaMetrics sets the current time if the timestamp is omitted.
 An arbitrary number of lines delimited by `\n` (aka newline char) can be sent in one go.
 After that the data may be read via [/api/v1/export](#how-to-export-data-in-json-line-format) endpoint:
+<div class="with-copy" markdown="1">
 ```bash
 curl -G 'http://localhost:8428/api/v1/export' -d 'match=foo.bar.baz'
 ```
+</div>
 The `/api/v1/export` endpoint should return the following response:
-```bash
+```json
 {"metric":{"__name__":"foo.bar.baz","tag1":"value1","tag2":"value2"},"values":[123],"timestamps":[1560277406000]}
 ```
@@ -745,8 +753,8 @@ More details may be found [here](https://github.com/VictoriaMetrics/VictoriaMetr
 ## Setting up service
-Read [these instructions](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/43) on how to set up VictoriaMetrics as a service in your OS.
-There is also [snap package for Ubuntu](https://snapcraft.io/victoriametrics).
+Read [instructions](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/43) on how to set up VictoriaMetrics
+as a service for your OS. A [snap package](https://snapcraft.io/victoriametrics) is available for Ubuntu.
 ## How to work with snapshots
@@ -839,7 +847,7 @@ for metrics to export. Use `{__name__!=""}` selector for fetching all the time s
 The response would contain all the data for the selected time series in [JSON streaming format](https://en.wikipedia.org/wiki/JSON_streaming#Line-delimited_JSON).
 Each JSON line contains samples for a single time series. An example output:
-```jsonl
+```json
 {"metric":{"__name__":"up","job":"node_exporter","instance":"localhost:9100"},"values":[0,0,0],"timestamps":[1549891472010,1549891487724,1549891503438]}
 {"metric":{"__name__":"up","job":"prometheus","instance":"localhost:9090"},"values":[1,1,1],"timestamps":[1549891461511,1549891476511,1549891491511]}
 ```
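The line-delimited JSON format described above means one JSON object per time series, so standard line-oriented tools apply directly. A minimal local sketch, with no running VictoriaMetrics involved (`example_export.jsonl` is a made-up file name and the sample lines copy the example output):

```shell
# Write two sample export lines, one per time series, then count the series.
cat > example_export.jsonl <<'EOF'
{"metric":{"__name__":"up","job":"node_exporter","instance":"localhost:9100"},"values":[0,0,0],"timestamps":[1549891472010,1549891487724,1549891503438]}
{"metric":{"__name__":"up","job":"prometheus","instance":"localhost:9090"},"values":[1,1,1],"timestamps":[1549891461511,1549891476511,1549891491511]}
EOF
# Counting series is just counting lines in this format.
wc -l < example_export.jsonl
```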
@@ -859,10 +867,14 @@ In this case the output may contain multiple lines with samples for the same tim
 Pass `Accept-Encoding: gzip` HTTP header in the request to `/api/v1/export` in order to reduce network bandwidth during exporting big amounts
 of time series data. This enables gzip compression for the exported data. Example for exporting gzipped data:
+<div class="with-copy" markdown="1">
 ```bash
 curl -H 'Accept-Encoding: gzip' http://localhost:8428/api/v1/export -d 'match[]={__name__!=""}' > data.jsonl.gz
 ```
+</div>
 The maximum duration for each request to `/api/v1/export` is limited by `-search.maxExportDuration` command-line flag.
 Exported data can be imported via POST'ing it to [/api/v1/import](#how-to-import-data-in-json-line-format).
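The gzipped export above is just the same line-delimited JSON, compressed. A minimal local sketch of the decompression side, with no server involved (the single sample line is illustrative, not real exported data):

```shell
# Emulate a gzipped export like data.jsonl.gz above, then read it back.
printf '%s\n' '{"metric":{"__name__":"up"},"values":[1],"timestamps":[1549891461511]}' \
  | gzip > data.jsonl.gz
# Decompress to stdout; the payload is ordinary line-delimited JSON again.
gunzip -c data.jsonl.gz
```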
@@ -1035,7 +1047,7 @@ curl -G 'http://localhost:8428/api/v1/export' -d 'match[]={ticker!=""}'
 The following response should be returned:
-```bash
+```json
 {"metric":{"__name__":"bid","market":"NASDAQ","ticker":"MSFT"},"values":[1.67],"timestamps":[1583865146520]}
 {"metric":{"__name__":"bid","market":"NYSE","ticker":"GOOG"},"values":[4.56],"timestamps":[1583865146495]}
 {"metric":{"__name__":"ask","market":"NASDAQ","ticker":"MSFT"},"values":[3.21],"timestamps":[1583865146520]}
@@ -1053,29 +1065,41 @@ VictoriaMetrics accepts data in [Prometheus exposition format](https://github.co
 and in [OpenMetrics format](https://github.com/OpenObservability/OpenMetrics/blob/master/specification/OpenMetrics.md)
 via `/api/v1/import/prometheus` path. For example, the following line imports a single line in Prometheus exposition format into VictoriaMetrics:
+<div class="with-copy" markdown="1">
 ```bash
 curl -d 'foo{bar="baz"} 123' -X POST 'http://localhost:8428/api/v1/import/prometheus'
 ```
+</div>
 The following command may be used for verifying the imported data:
+<div class="with-copy" markdown="1">
 ```bash
 curl -G 'http://localhost:8428/api/v1/export' -d 'match={__name__=~"foo"}'
 ```
+</div>
 It should return something like the following:
-```
+```json
 {"metric":{"__name__":"foo","bar":"baz"},"values":[123],"timestamps":[1594370496905]}
 ```
 Pass `Content-Encoding: gzip` HTTP request header to `/api/v1/import/prometheus` for importing gzipped data:
+<div class="with-copy" markdown="1">
 ```bash
 # Import gzipped data to <destination-victoriametrics>:
 curl -X POST -H 'Content-Encoding: gzip' http://destination-victoriametrics:8428/api/v1/import/prometheus -T prometheus_data.gz
 ```
+</div>
 Extra labels may be added to all the imported metrics by passing `extra_label=name=value` query args.
 For example, `/api/v1/import/prometheus?extra_label=foo=bar` would add `{foo="bar"}` label to all the imported metrics.
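A file like the `prometheus_data.gz` used above can be prepared from plain Prometheus exposition-format lines before POSTing with `Content-Encoding: gzip`. A hedged local sketch (the metric names are illustrative, and no request is actually sent):

```shell
# Gzip two exposition-format lines into a payload file.
printf 'foo{bar="baz"} 123\nfoo{bar="qux"} 456\n' | gzip > prometheus_data.gz
# Sanity-check the payload: it should decompress back to the two lines.
gunzip -c prometheus_data.gz
```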
@@ -1245,17 +1269,17 @@ Information about merging process is available in [single-node VictoriaMetrics](
 and [clustered VictoriaMetrics](https://grafana.com/grafana/dashboards/11176) Grafana dashboards.
 See more details in [monitoring docs](#monitoring).
-The `merge` process is usually named "compaction", because the resulting `part` size is usually smaller than
-the sum of the source `parts` because of better compression rate. The merge process provides the following additional benefits:
+The `merge` process improves compression rate and keeps number of `parts` on disk relatively low.
+Benefits of doing the merge process are the following:
 * it improves query performance, since lower number of `parts` are inspected with each query
 * it reduces the number of data files, since each `part` contains fixed number of files
 * various background maintenance tasks such as [de-duplication](#deduplication), [downsampling](#downsampling)
-and [freeing up disk space for the deleted time series](#how-to-delete-time-series) are perfomed during the merge.
+and [freeing up disk space for the deleted time series](#how-to-delete-time-series) are performed during the merge.
 Newly added `parts` either appear in the storage or fail to appear.
 Storage never contains partially created parts. The same applies to merge process — `parts` are either fully
-merged into a new `part` or fail to merge. There are no partially merged `parts` in MergeTree.
+merged into a new `part` or fail to merge. MergeTree doesn't contain partially merged `parts`.
 `Part` contents in MergeTree never change. Parts are immutable. They may be only deleted after the merge
 to a bigger `part` or when the `part` contents goes outside the configured `-retentionPeriod`.
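The merge semantics described above (immutable sorted parts combined into one bigger part, with the result appearing atomically or not at all) can be illustrated with a loose shell analogy; the file names are made up and this is not how VictoriaMetrics stores data on disk:

```shell
# Loose analogy only: two sorted "parts" merged into a bigger sorted part.
printf '1\n3\n5\n' > part_a
printf '2\n4\n6\n' > part_b
# Merge the already-sorted inputs; the temp-file-then-rename step mirrors
# "parts are either fully merged into a new part or fail to merge".
sort -n -m part_a part_b > part_merged.tmp && mv part_merged.tmp part_merged
cat part_merged
```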
@@ -1357,9 +1381,9 @@ or similar auth proxy.
 ## Tuning
-* There is no need for VictoriaMetrics tuning since it uses reasonable defaults for command-line flags,
+* No need in tuning for VictoriaMetrics - it uses reasonable defaults for command-line flags,
 which are automatically adjusted for the available CPU and RAM resources.
-* There is no need for Operating System tuning since VictoriaMetrics is optimized for default OS settings.
+* No need in tuning for Operating System - VictoriaMetrics is optimized for default OS settings.
 The only option is increasing the limit on [the number of open files in the OS](https://medium.com/@muhammadtriwibowo/set-permanently-ulimit-n-open-files-in-ubuntu-4d61064429a).
 The recommendation is not specific for VictoriaMetrics only but also for any service which handles many HTTP connections and stores data on disk.
 * VictoriaMetrics is a write-heavy application and its performance depends on disk performance. So be careful with other
@@ -1376,19 +1400,23 @@ mkfs.ext4 ... -O 64bit,huge_file,extent -T huge
 ## Monitoring
-VictoriaMetrics exports internal metrics in Prometheus format at `/metrics` page.
-These metrics may be collected by [vmagent](https://docs.victoriametrics.com/vmagent.html)
-or Prometheus by adding the corresponding scrape config to it.
-Alternatively they can be self-scraped by setting `-selfScrapeInterval` command-line flag to duration greater than 0.
-For example, `-selfScrapeInterval=10s` would enable self-scraping of `/metrics` page with 10 seconds interval.
+VictoriaMetrics exports internal metrics in Prometheus exposition format at `/metrics` page.
+These metrics can be scraped via [vmagent](https://docs.victoriametrics.com/vmagent.html) or Prometheus.
+Alternatively, single-node VictoriaMetrics can self-scrape the metrics when `-selfScrapeInterval` command-line flag is
+set to duration greater than 0. For example, `-selfScrapeInterval=10s` would enable self-scraping of `/metrics` page
+with 10 seconds interval.
-There are officials Grafana dashboards for [single-node VictoriaMetrics](https://grafana.com/dashboards/10229) and [clustered VictoriaMetrics](https://grafana.com/grafana/dashboards/11176). There is also an [alternative dashboard for clustered VictoriaMetrics](https://grafana.com/grafana/dashboards/11831).
+Official Grafana dashboards available for [single-node](https://grafana.com/dashboards/10229)
+and [clustered](https://grafana.com/grafana/dashboards/11176) VictoriaMetrics.
+See an [alternative dashboard for clustered VictoriaMetrics](https://grafana.com/grafana/dashboards/11831)
+created by community.
-Graphs on these dashboard contain useful hints - hover the `i` icon at the top left corner of each graph in order to read it.
+Graphs on the dashboards contain useful hints - hover the `i` icon in the top left corner of each graph to read it.
-It is recommended setting up alerts in [vmalert](https://docs.victoriametrics.com/vmalert.html) or in Prometheus from [this config](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/deployment/docker/alerts.yml).
+We recommend setting up [alerts](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/deployment/docker/alerts.yml)
+via [vmalert](https://docs.victoriametrics.com/vmalert.html) or via Prometheus.
-The most interesting metrics are:
+The most interesting health metrics are the following:
 * `vm_cache_entries{type="storage/hour_metric_ids"}` - the number of time series with new data points during the last hour
 aka [active time series](https://docs.victoriametrics.com/FAQ.html#what-is-an-active-time-series).
@@ -1404,9 +1432,7 @@ The most interesting metrics are:
 If this number remains high during extended periods of time, then it is likely more RAM is needed for optimal handling
 of the current number of [active time series](https://docs.victoriametrics.com/FAQ.html#what-is-an-active-time-series).
-VictoriaMetrics also exposes currently running queries with their execution times at `/api/v1/status/active_queries` page.
-See the example of alerting rules for VM components [here](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/deployment/docker/alerts.yml).
+VictoriaMetrics exposes currently running queries and their execution times at `/api/v1/status/active_queries` page.
 ## TSDB stats
@@ -1791,9 +1817,10 @@ Files included in each folder:
 ### We kindly ask
 * Please don't use any other font instead of suggested.
-* There should be sufficient clear space around the logo.
+* To keep enough clear space around the logo.
 * Do not change spacing, alignment, or relative locations of the design elements.
-* Do not change the proportions of any of the design elements or the design itself. You may resize as needed but must retain all proportions.
+* Do not change the proportions for any of the design elements or the design itself.
+You may resize as needed but must retain all proportions.
 ## List of command-line flags

View File

@ -352,13 +352,17 @@ echo '
The imported data can be read via [export API](https://docs.victoriametrics.com/#how-to-export-data-in-json-line-format): The imported data can be read via [export API](https://docs.victoriametrics.com/#how-to-export-data-in-json-line-format):
<div class="with-copy" markdown="1">
```bash ```bash
curl http://localhost:8428/api/v1/export -d 'match[]=system.load.1' curl http://localhost:8428/api/v1/export -d 'match[]=system.load.1'
``` ```
</div>
This command should return the following output if everything is OK: This command should return the following output if everything is OK:
``` ```json
{"metric":{"__name__":"system.load.1","environment":"test","host":"test.example.com"},"values":[0.5],"timestamps":[1632833641000]} {"metric":{"__name__":"system.load.1","environment":"test","host":"test.example.com"},"values":[0.5],"timestamps":[1632833641000]}
``` ```
@ -460,13 +464,17 @@ VictoriaMetrics sets the current time if the timestamp is omitted.
An arbitrary number of lines delimited by `\n` (aka newline char) can be sent in one go. An arbitrary number of lines delimited by `\n` (aka newline char) can be sent in one go.
After that the data may be read via [/api/v1/export](#how-to-export-data-in-json-line-format) endpoint: After that the data may be read via [/api/v1/export](#how-to-export-data-in-json-line-format) endpoint:
<div class="with-copy" markdown="1">
```bash ```bash
curl -G 'http://localhost:8428/api/v1/export' -d 'match=foo.bar.baz' curl -G 'http://localhost:8428/api/v1/export' -d 'match=foo.bar.baz'
``` ```
</div>
The `/api/v1/export` endpoint should return the following response: The `/api/v1/export` endpoint should return the following response:
```bash ```json
{"metric":{"__name__":"foo.bar.baz","tag1":"value1","tag2":"value2"},"values":[123],"timestamps":[1560277406000]} {"metric":{"__name__":"foo.bar.baz","tag1":"value1","tag2":"value2"},"values":[123],"timestamps":[1560277406000]}
``` ```
@ -745,8 +753,8 @@ More details may be found [here](https://github.com/VictoriaMetrics/VictoriaMetr
## Setting up service ## Setting up service
Read [these instructions](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/43) on how to set up VictoriaMetrics as a service in your OS. Read [instructions](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/43) on how to set up VictoriaMetrics
There is also [snap package for Ubuntu](https://snapcraft.io/victoriametrics). as a service for your OS. A [snap package](https://snapcraft.io/victoriametrics) is available for Ubuntu.
## How to work with snapshots ## How to work with snapshots
@ -839,7 +847,7 @@ for metrics to export. Use `{__name__!=""}` selector for fetching all the time s
The response would contain all the data for the selected time series in [JSON streaming format](https://en.wikipedia.org/wiki/JSON_streaming#Line-delimited_JSON). The response would contain all the data for the selected time series in [JSON streaming format](https://en.wikipedia.org/wiki/JSON_streaming#Line-delimited_JSON).
Each JSON line contains samples for a single time series. An example output: Each JSON line contains samples for a single time series. An example output:
```jsonl ```json
{"metric":{"__name__":"up","job":"node_exporter","instance":"localhost:9100"},"values":[0,0,0],"timestamps":[1549891472010,1549891487724,1549891503438]} {"metric":{"__name__":"up","job":"node_exporter","instance":"localhost:9100"},"values":[0,0,0],"timestamps":[1549891472010,1549891487724,1549891503438]}
{"metric":{"__name__":"up","job":"prometheus","instance":"localhost:9090"},"values":[1,1,1],"timestamps":[1549891461511,1549891476511,1549891491511]} {"metric":{"__name__":"up","job":"prometheus","instance":"localhost:9090"},"values":[1,1,1],"timestamps":[1549891461511,1549891476511,1549891491511]}
``` ```
@ -859,10 +867,14 @@ In this case the output may contain multiple lines with samples for the same tim
Pass `Accept-Encoding: gzip` HTTP header in the request to `/api/v1/export` in order to reduce network bandwidth during exporing big amounts Pass `Accept-Encoding: gzip` HTTP header in the request to `/api/v1/export` in order to reduce network bandwidth during exporing big amounts
of time series data. This enables gzip compression for the exported data. Example for exporting gzipped data: of time series data. This enables gzip compression for the exported data. Example for exporting gzipped data:
<div class="with-copy" markdown="1">
```bash ```bash
curl -H 'Accept-Encoding: gzip' http://localhost:8428/api/v1/export -d 'match[]={__name__!=""}' > data.jsonl.gz curl -H 'Accept-Encoding: gzip' http://localhost:8428/api/v1/export -d 'match[]={__name__!=""}' > data.jsonl.gz
``` ```
</div>
The maximum duration for each request to `/api/v1/export` is limited by `-search.maxExportDuration` command-line flag. The maximum duration for each request to `/api/v1/export` is limited by `-search.maxExportDuration` command-line flag.
Exported data can be imported via POST'ing it to [/api/v1/import](#how-to-import-data-in-json-line-format). Exported data can be imported via POST'ing it to [/api/v1/import](#how-to-import-data-in-json-line-format).
@ -1035,7 +1047,7 @@ curl -G 'http://localhost:8428/api/v1/export' -d 'match[]={ticker!=""}'
The following response should be returned: The following response should be returned:
```bash ```json
{"metric":{"__name__":"bid","market":"NASDAQ","ticker":"MSFT"},"values":[1.67],"timestamps":[1583865146520]} {"metric":{"__name__":"bid","market":"NASDAQ","ticker":"MSFT"},"values":[1.67],"timestamps":[1583865146520]}
{"metric":{"__name__":"bid","market":"NYSE","ticker":"GOOG"},"values":[4.56],"timestamps":[1583865146495]} {"metric":{"__name__":"bid","market":"NYSE","ticker":"GOOG"},"values":[4.56],"timestamps":[1583865146495]}
{"metric":{"__name__":"ask","market":"NASDAQ","ticker":"MSFT"},"values":[3.21],"timestamps":[1583865146520]} {"metric":{"__name__":"ask","market":"NASDAQ","ticker":"MSFT"},"values":[3.21],"timestamps":[1583865146520]}
@ -1053,29 +1065,41 @@ VictoriaMetrics accepts data in [Prometheus exposition format](https://github.co
and in [OpenMetrics format](https://github.com/OpenObservability/OpenMetrics/blob/master/specification/OpenMetrics.md) and in [OpenMetrics format](https://github.com/OpenObservability/OpenMetrics/blob/master/specification/OpenMetrics.md)
via `/api/v1/import/prometheus` path. For example, the following line imports a single line in Prometheus exposition format into VictoriaMetrics: via `/api/v1/import/prometheus` path. For example, the following line imports a single line in Prometheus exposition format into VictoriaMetrics:
<div class="with-copy" markdown="1">
```bash ```bash
curl -d 'foo{bar="baz"} 123' -X POST 'http://localhost:8428/api/v1/import/prometheus' curl -d 'foo{bar="baz"} 123' -X POST 'http://localhost:8428/api/v1/import/prometheus'
``` ```
</div>
The following command may be used for verifying the imported data: The following command may be used for verifying the imported data:
<div class="with-copy" markdown="1">
```bash ```bash
curl -G 'http://localhost:8428/api/v1/export' -d 'match={__name__=~"foo"}' curl -G 'http://localhost:8428/api/v1/export' -d 'match={__name__=~"foo"}'
``` ```
</div>
It should return something like the following: It should return something like the following:
``` ```json
{"metric":{"__name__":"foo","bar":"baz"},"values":[123],"timestamps":[1594370496905]} {"metric":{"__name__":"foo","bar":"baz"},"values":[123],"timestamps":[1594370496905]}
``` ```
Pass `Content-Encoding: gzip` HTTP request header to `/api/v1/import/prometheus` for importing gzipped data: Pass `Content-Encoding: gzip` HTTP request header to `/api/v1/import/prometheus` for importing gzipped data:
<div class="with-copy" markdown="1">
```bash ```bash
# Import gzipped data to <destination-victoriametrics>: # Import gzipped data to <destination-victoriametrics>:
curl -X POST -H 'Content-Encoding: gzip' http://destination-victoriametrics:8428/api/v1/import/prometheus -T prometheus_data.gz curl -X POST -H 'Content-Encoding: gzip' http://destination-victoriametrics:8428/api/v1/import/prometheus -T prometheus_data.gz
``` ```
</div>
Extra labels may be added to all the imported metrics by passing `extra_label=name=value` query args. Extra labels may be added to all the imported metrics by passing `extra_label=name=value` query args.
For example, `/api/v1/import/prometheus?extra_label=foo=bar` would add `{foo="bar"}` label to all the imported metrics. For example, `/api/v1/import/prometheus?extra_label=foo=bar` would add `{foo="bar"}` label to all the imported metrics.
@ -1245,17 +1269,17 @@ Information about merging process is available in [single-node VictoriaMetrics](
and [clustered VictoriaMetrics](https://grafana.com/grafana/dashboards/11176) Grafana dashboards. and [clustered VictoriaMetrics](https://grafana.com/grafana/dashboards/11176) Grafana dashboards.
See more details in [monitoring docs](#monitoring). See more details in [monitoring docs](#monitoring).
The `merge` process is usually named "compaction", because the resulting `part` size is usually smaller than The `merge` process improves compression rate and keeps number of `parts` on disk relatively low.
the sum of the source `parts` because of better compression rate. The merge process provides the following additional benefits: Benefits of doing the merge process are the following:
* it improves query performance, since lower number of `parts` are inspected with each query * it improves query performance, since lower number of `parts` are inspected with each query
* it reduces the number of data files, since each `part` contains fixed number of files * it reduces the number of data files, since each `part` contains fixed number of files
* various background maintenance tasks such as [de-duplication](#deduplication), [downsampling](#downsampling) * various background maintenance tasks such as [de-duplication](#deduplication), [downsampling](#downsampling)
and [freeing up disk space for the deleted time series](#how-to-delete-time-series) are perfomed during the merge. and [freeing up disk space for the deleted time series](#how-to-delete-time-series) are performed during the merge.
Newly added `parts` either appear in the storage or fail to appear. Newly added `parts` either appear in the storage or fail to appear.
Storage never contains partially created parts. The same applies to merge process — `parts` are either fully Storage never contains partially created parts. The same applies to merge process — `parts` are either fully
merged into a new `part` or fail to merge. There are no partially merged `parts` in MergeTree. merged into a new `part` or fail to merge. MergeTree doesn't contain partially merged `parts`.
`Part` contents in MergeTree never change. Parts are immutable. They may be only deleted after the merge `Part` contents in MergeTree never change. Parts are immutable. They may be only deleted after the merge
to a bigger `part` or when the `part` contents goes outside the configured `-retentionPeriod`. to a bigger `part` or when the `part` contents goes outside the configured `-retentionPeriod`.
@ -1357,9 +1381,9 @@ or similar auth proxy.
## Tuning ## Tuning
* There is no need for VictoriaMetrics tuning since it uses reasonable defaults for command-line flags, * No need in tuning for VictoriaMetrics - it uses reasonable defaults for command-line flags,
which are automatically adjusted for the available CPU and RAM resources. which are automatically adjusted for the available CPU and RAM resources.
* There is no need for Operating System tuning since VictoriaMetrics is optimized for default OS settings. * No need in tuning for Operating System - VictoriaMetrics is optimized for default OS settings.
The only option is increasing the limit on [the number of open files in the OS](https://medium.com/@muhammadtriwibowo/set-permanently-ulimit-n-open-files-in-ubuntu-4d61064429a). The only option is increasing the limit on [the number of open files in the OS](https://medium.com/@muhammadtriwibowo/set-permanently-ulimit-n-open-files-in-ubuntu-4d61064429a).
The recommendation is not specific for VictoriaMetrics only but also for any service which handles many HTTP connections and stores data on disk. The recommendation is not specific for VictoriaMetrics only but also for any service which handles many HTTP connections and stores data on disk.
* VictoriaMetrics is a write-heavy application and its performance depends on disk performance. So be careful with other * VictoriaMetrics is a write-heavy application and its performance depends on disk performance. So be careful with other
@ -1376,19 +1400,23 @@ mkfs.ext4 ... -O 64bit,huge_file,extent -T huge
## Monitoring ## Monitoring
VictoriaMetrics exports internal metrics in Prometheus format at `/metrics` page. VictoriaMetrics exports internal metrics in Prometheus exposition format at `/metrics` page.
These metrics may be collected by [vmagent](https://docs.victoriametrics.com/vmagent.html) These metrics can be scraped via [vmagent](https://docs.victoriametrics.com/vmagent.html) or Prometheus.
or Prometheus by adding the corresponding scrape config to it. Alternatively, single-node VictoriaMetrics can self-scrape the metrics when `-selfScrapeInterval` command-line flag is
Alternatively they can be self-scraped by setting `-selfScrapeInterval` command-line flag to duration greater than 0. set to duration greater than 0. For example, `-selfScrapeInterval=10s` would enable self-scraping of `/metrics` page
For example, `-selfScrapeInterval=10s` would enable self-scraping of `/metrics` page with 10 seconds interval. with 10 seconds interval.
Official Grafana dashboards are available for [single-node](https://grafana.com/dashboards/10229)
and [clustered](https://grafana.com/grafana/dashboards/11176) VictoriaMetrics.
There is also an [alternative dashboard for clustered VictoriaMetrics](https://grafana.com/grafana/dashboards/11831)
created by the community.
Graphs on the dashboards contain useful hints - hover the `i` icon in the top left corner of each graph to read it.
We recommend setting up [alerts](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/deployment/docker/alerts.yml)
via [vmalert](https://docs.victoriametrics.com/vmalert.html) or via Prometheus.
The most interesting health metrics are the following:
* `vm_cache_entries{type="storage/hour_metric_ids"}` - the number of time series with new data points during the last hour
aka [active time series](https://docs.victoriametrics.com/FAQ.html#what-is-an-active-time-series).
If this number remains high during extended periods of time, then it is likely more RAM is needed for optimal handling
of the current number of [active time series](https://docs.victoriametrics.com/FAQ.html#what-is-an-active-time-series).
VictoriaMetrics exposes currently running queries and their execution times at the `/api/v1/status/active_queries` page.
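Individual health metrics can also be checked from the command line without a dashboard. A minimal sketch, with the scrape output emulated in a variable (in practice it would come from `curl http://localhost:8428/metrics`; the value `61812` is made up for illustration):

```shell
# A fragment of a /metrics scrape, stored in a variable for illustration:
scrape='vm_cache_entries{type="storage/hour_metric_ids"} 61812'

# Extract the current number of active time series from the fragment:
printf '%s\n' "$scrape" | awk '/storage\/hour_metric_ids/ {print $2}'
```

The same `awk` filter works when piping a live scrape into it.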
## TSDB stats
### We kindly ask
* Please don't use any other font instead of the suggested one.
* Keep sufficient clear space around the logo.
* Do not change spacing, alignment, or relative locations of the design elements.
* Do not change the proportions of any of the design elements or the design itself.
  You may resize as needed but must retain all proportions.
## List of command-line flags
The imported data can be read via [export API](https://docs.victoriametrics.com/#how-to-export-data-in-json-line-format):
<div class="with-copy" markdown="1">
```bash
curl http://localhost:8428/api/v1/export -d 'match[]=system.load.1'
```
</div>
This command should return the following output if everything is OK:
```json
{"metric":{"__name__":"system.load.1","environment":"test","host":"test.example.com"},"values":[0.5],"timestamps":[1632833641000]}
```
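Exported lines are plain JSON, so individual fields can be pulled out with standard tools. A minimal sketch, with one exported line held in a shell variable (no `jq` required):

```shell
# One line in the export format, stored in a variable for illustration:
line='{"metric":{"__name__":"system.load.1","environment":"test","host":"test.example.com"},"values":[0.5],"timestamps":[1632833641000]}'

# Pull the metric name out of the line:
printf '%s' "$line" | sed -n 's/.*"__name__":"\([^"]*\)".*/\1/p'
```

The same pattern works when piping `curl .../api/v1/export` output directly into `sed`.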
VictoriaMetrics sets the current time if the timestamp is omitted.
An arbitrary number of lines delimited by `\n` (aka newline char) can be sent in one go.
After that the data may be read via the [/api/v1/export](#how-to-export-data-in-json-line-format) endpoint:
<div class="with-copy" markdown="1">
```bash
curl -G 'http://localhost:8428/api/v1/export' -d 'match=foo.bar.baz'
```
</div>
The `/api/v1/export` endpoint should return the following response:
```json
{"metric":{"__name__":"foo.bar.baz","tag1":"value1","tag2":"value2"},"values":[123],"timestamps":[1560277406000]}
```
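Since multiple `\n`-delimited lines can be sent in one request, a payload can be assembled up front. A sketch with made-up metric names (the actual send would go to the Graphite ingestion port, e.g. via `nc`, rather than being counted locally):

```shell
# Build a two-line Graphite plaintext payload in a single variable:
payload=$(printf 'foo.bar.baz;tag1=value1 123 1560277406\nfoo.bar.qux;tag1=value2 456 1560277406')

# Count how many lines would be sent in one go:
printf '%s\n' "$payload" | wc -l | tr -d ' '
```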
## Setting up service
Read [the instructions](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/43) on how to set up VictoriaMetrics
as a service for your OS. A [snap package](https://snapcraft.io/victoriametrics) is available for Ubuntu.
## How to work with snapshots
The response would contain all the data for the selected time series in [JSON streaming format](https://en.wikipedia.org/wiki/JSON_streaming#Line-delimited_JSON).
Each JSON line contains samples for a single time series. An example output:
```json
{"metric":{"__name__":"up","job":"node_exporter","instance":"localhost:9100"},"values":[0,0,0],"timestamps":[1549891472010,1549891487724,1549891503438]}
{"metric":{"__name__":"up","job":"prometheus","instance":"localhost:9090"},"values":[1,1,1],"timestamps":[1549891461511,1549891476511,1549891491511]}
```
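Because each line is a complete JSON document, the stream can be processed one series at a time without buffering the whole response. A sketch where the stream is emulated with `printf` instead of a live `curl` call:

```shell
# Emulated export stream: two series, one JSON document per line.
printf '%s\n' \
  '{"metric":{"__name__":"up","job":"node_exporter"},"values":[0,0,0]}' \
  '{"metric":{"__name__":"up","job":"prometheus"},"values":[1,1,1]}' |
while IFS= read -r series; do
  # Handle each series independently; here we just print its "job" label.
  printf '%s\n' "$series" | sed -n 's/.*"job":"\([^"]*\)".*/\1/p'
done
```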
Pass the `Accept-Encoding: gzip` HTTP header in the request to `/api/v1/export` in order to reduce network bandwidth when exporting big amounts
of time series data. This enables gzip compression for the exported data. Example for exporting gzipped data:
<div class="with-copy" markdown="1">
```bash
curl -H 'Accept-Encoding: gzip' http://localhost:8428/api/v1/export -d 'match[]={__name__!=""}' > data.jsonl.gz
```
</div>
The maximum duration for each request to `/api/v1/export` is limited by the `-search.maxExportDuration` command-line flag.
Exported data can be imported via POST'ing it to [/api/v1/import](#how-to-import-data-in-json-line-format).
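The gzip round-trip can be sketched locally without a running server; the file name mirrors the `data.jsonl.gz` example above:

```shell
# Create a tiny line-delimited file standing in for exported data:
printf '%s\n' '{"metric":{"__name__":"foo"},"values":[1]}' > data.jsonl

# Compress it the way the server would for `Accept-Encoding: gzip` ...
gzip -f data.jsonl            # produces data.jsonl.gz

# ... and decompress to get the original line-delimited JSON back:
gunzip -c data.jsonl.gz

rm -f data.jsonl.gz           # clean up
```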
The imported data can be exported via `curl -G 'http://localhost:8428/api/v1/export' -d 'match[]={ticker!=""}'`.
The following response should be returned:
```json
{"metric":{"__name__":"bid","market":"NASDAQ","ticker":"MSFT"},"values":[1.67],"timestamps":[1583865146520]}
{"metric":{"__name__":"bid","market":"NYSE","ticker":"GOOG"},"values":[4.56],"timestamps":[1583865146495]}
{"metric":{"__name__":"ask","market":"NASDAQ","ticker":"MSFT"},"values":[3.21],"timestamps":[1583865146520]}
```
VictoriaMetrics accepts data in Prometheus exposition format
and in [OpenMetrics format](https://github.com/OpenObservability/OpenMetrics/blob/master/specification/OpenMetrics.md)
via the `/api/v1/import/prometheus` path. For example, the following line imports a single line in Prometheus exposition format into VictoriaMetrics:
<div class="with-copy" markdown="1">
```bash
curl -d 'foo{bar="baz"} 123' -X POST 'http://localhost:8428/api/v1/import/prometheus'
```
</div>
The following command may be used for verifying the imported data:
<div class="with-copy" markdown="1">
```bash
curl -G 'http://localhost:8428/api/v1/export' -d 'match={__name__=~"foo"}'
```
</div>
It should return something like the following:
```json
{"metric":{"__name__":"foo","bar":"baz"},"values":[123],"timestamps":[1594370496905]}
```
Pass the `Content-Encoding: gzip` HTTP request header to `/api/v1/import/prometheus` for importing gzipped data:
<div class="with-copy" markdown="1">
```bash
# Import gzipped data to <destination-victoriametrics>:
curl -X POST -H 'Content-Encoding: gzip' http://destination-victoriametrics:8428/api/v1/import/prometheus -T prometheus_data.gz
```
</div>
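The `prometheus_data.gz` file referenced above can be produced from plain exposition-format lines; the metric names below are illustrative:

```shell
# Write two exposition-format lines to a file ...
printf 'foo{bar="baz"} 123\nqux 456\n' > prometheus_data

# ... and gzip it into the payload used by the import command:
gzip -f prometheus_data       # produces prometheus_data.gz

# Verify the compressed payload locally before importing:
gunzip -c prometheus_data.gz

rm -f prometheus_data.gz      # clean up
```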
Extra labels may be added to all the imported metrics by passing `extra_label=name=value` query args.
For example, `/api/v1/import/prometheus?extra_label=foo=bar` would add the `{foo="bar"}` label to all the imported metrics.
Information about the merge process is available in the [single-node VictoriaMetrics](https://grafana.com/dashboards/10229)
and [clustered VictoriaMetrics](https://grafana.com/grafana/dashboards/11176) Grafana dashboards.
See more details in [monitoring docs](#monitoring).
The `merge` process improves the compression rate and keeps the number of `parts` on disk relatively low.
The merge process provides the following benefits:
* it improves query performance, since a lower number of `parts` is inspected with each query
* it reduces the number of data files, since each `part` contains a fixed number of files
* various background maintenance tasks such as [de-duplication](#deduplication), [downsampling](#downsampling)
and [freeing up disk space for the deleted time series](#how-to-delete-time-series) are performed during the merge.
Newly added `parts` either appear in the storage or fail to appear.
Storage never contains partially created parts. The same applies to the merge process: `parts` are either fully
merged into a new `part` or fail to merge. MergeTree doesn't contain partially merged `parts`.
`Part` contents in MergeTree never change. Parts are immutable. They may only be deleted after the merge
to a bigger `part` or when the `part` contents go outside the configured `-retentionPeriod`.
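The all-or-nothing behaviour of merges can be illustrated with the classic write-then-rename pattern, sketched here with directories standing in for `parts` (the atomic `mv` illustrates the idea, not VictoriaMetrics' actual on-disk code):

```shell
# Two source "parts":
mkdir -p part_1 part_2
printf 'rows-a' > part_1/data
printf 'rows-b' > part_2/data

# Merge into a temporary directory first; readers never see this half-done state:
mkdir -p part_merged.tmp
cat part_1/data part_2/data > part_merged.tmp/data

# Publish atomically: after the rename, the merged part either fully exists or not at all.
mv part_merged.tmp part_merged
cat part_merged/data

rm -rf part_1 part_2 part_merged   # clean up
```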
## Tuning
* There is no need to tune VictoriaMetrics - it uses reasonable defaults for command-line flags,
which are automatically adjusted for the available CPU and RAM resources.
* There is no need to tune the Operating System - VictoriaMetrics is optimized for default OS settings.
The only option is increasing the limit on [the number of open files in the OS](https://medium.com/@muhammadtriwibowo/set-permanently-ulimit-n-open-files-in-ubuntu-4d61064429a).
The recommendation is not specific for VictoriaMetrics only but also for any service which handles many HTTP connections and stores data on disk.
* VictoriaMetrics is a write-heavy application and its performance depends on disk performance. So be careful with other
Run this command in your terminal:
<div class="with-copy" markdown="1">
```bash
helm install vmsingle vm/victoria-metrics-single -f https://docs.victoriametrics.com/guides/guide-vmsingle-values.yaml
```
In practice, histogram `vm_per_query_rows_processed_count` may be used in the following way:
```go
// define the histogram
perQueryRowsProcessed := metrics.NewHistogram(`vm_per_query_rows_processed_count`)
```