Mirror of https://github.com/VictoriaMetrics/VictoriaMetrics.git, synced 2024-11-23 12:31:07 +01:00

docs: cross-link downsampling docs from deduplication and vmalert docs

Parent: b5b3c585b3
Commit: ba1b3b8ef2
```diff
@@ -1131,7 +1131,7 @@ with the enabled de-duplication. See [this section](#deduplication) for details.
 
 VictoriaMetrics de-duplicates data points if the `-dedup.minScrapeInterval` command-line flag
 is set to a positive duration. For example, `-dedup.minScrapeInterval=60s` would de-duplicate data points
-on the same time series if they fall within the same discrete 60s bucket. The earliest data point will be kept. In the case of equal timestamps, an arbitrary data point will be kept.
+on the same time series if they fall within the same discrete 60s bucket. The earliest data point will be kept. In the case of equal timestamps, an arbitrary data point will be kept. Setting `-dedup.minScrapeInterval=D` is equivalent to `-downsampling.period=0s:D` if [downsampling](#downsampling) is enabled.
 
 The recommended value for `-dedup.minScrapeInterval` must be equal to the `scrape_interval` config from Prometheus configs. It is recommended to have a single `scrape_interval` across all the scrape targets. See [this article](https://www.robustperception.io/keep-it-simple-scrape_interval-id) for details.
```
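The bucketing rule described in this hunk can be sketched as follows (an illustrative model of the documented behavior, not VictoriaMetrics code; the function and variable names are invented):

```python
def deduplicate(points, interval_ms):
    """Keep the earliest data point in each discrete time bucket.

    points: iterable of (timestamp_ms, value) pairs.
    Models `-dedup.minScrapeInterval`: with interval_ms=60000,
    all points falling into the same 60s bucket collapse to one.
    """
    kept = {}
    for ts, value in points:
        bucket = ts // interval_ms
        # The earliest timestamp wins within a bucket; on equal
        # timestamps the choice is arbitrary (here: first seen).
        if bucket not in kept or ts < kept[bucket][0]:
            kept[bucket] = (ts, value)
    return [kept[b] for b in sorted(kept)]
```

For example, `deduplicate([(1000, 1.0), (30000, 2.0), (61000, 3.0)], 60000)` keeps `(1000, 1.0)` from the first 60s bucket and `(61000, 3.0)` from the second.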
```diff
@@ -341,7 +341,7 @@ Check how to replace it with [cluster VictoriaMetrics](#cluster-victoriametrics)
 
 #### Downsampling and aggregation via vmalert
 
-Example shows how to build a topology where `vmalert` will process data from one cluster
+The following example shows how to build a topology where `vmalert` will process data from one cluster
 and write results into another. Such clusters may be called "hot" (low retention,
 high-speed disks, used for operational monitoring) and "cold" (long-term retention,
 slower/cheaper disks, low-resolution data). With the help of `vmalert`, a user can set up
```
```diff
@@ -361,6 +361,8 @@ Please note, [replay](#rules-backfilling) feature may be used for transforming h
 
 Flags `-remoteRead.url` and `-notifier.url` are omitted since we assume only recording rules are used.
 
+See also [downsampling docs](https://docs.victoriametrics.com/#downsampling).
+
 ### Web
 
```
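The hot/cold topology described by this hunk could be wired up roughly as follows (a sketch, not a command from these docs: the rule-file path and cluster URLs are placeholder assumptions; `-rule`, `-datasource.url` and `-remoteWrite.url` are standard `vmalert` flags):

```sh
# vmalert reads raw data from the "hot" cluster and writes
# recording-rule results into the "cold" cluster.
# -remoteRead.url and -notifier.url are omitted, since only
# recording rules (no alerting) are assumed.
./vmalert \
  -rule=/etc/vmalert/downsampling-rules.yml \
  -datasource.url=http://hot-cluster-vmselect:8481/select/0/prometheus \
  -remoteWrite.url=http://cold-cluster-vminsert:8480/insert/0/prometheus
```

The recording rules themselves would aggregate high-resolution series into lower-resolution ones before they land in the long-retention "cold" cluster.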