diff --git a/docs/VictoriaLogs/LogsQL.md b/docs/VictoriaLogs/LogsQL.md
index c3f5ebf910..aca319cbdf 100644
--- a/docs/VictoriaLogs/LogsQL.md
+++ b/docs/VictoriaLogs/LogsQL.md
@@ -246,7 +246,7 @@ The list of LogsQL filters:
 ### Time filter
 
 VictoriaLogs scans all the logs for every query that doesn't contain a filter on the [`_time` field](https://docs.victoriametrics.com/VictoriaLogs/keyConcepts.html#time-field).
-It uses various optimizations in order to speed up full scan queries without the `_time` filter,
+It uses various optimizations in order to accelerate full scan queries without the `_time` filter,
 but such queries can be slow if the storage contains a large number of logs over a long time range.
 The easiest way to optimize queries is to narrow down the search with a filter on the [`_time` field](https://docs.victoriametrics.com/VictoriaLogs/keyConcepts.html#time-field).
diff --git a/docs/stream-aggregation.md b/docs/stream-aggregation.md
index aa70aa00c9..027c7c6482 100644
--- a/docs/stream-aggregation.md
+++ b/docs/stream-aggregation.md
@@ -124,7 +124,7 @@ Sometimes [alerting queries](https://docs.victoriametrics.com/vmalert.html#alert
 disk IO and network bandwidth on the metrics storage side. For example, if the `http_request_duration_seconds` histogram is generated by thousands
 of application instances, then the alerting query `histogram_quantile(0.99, sum(increase(http_request_duration_seconds_bucket[5m])) without (instance)) > 0.5`
 can become slow, since it needs to scan a very large number of unique [time series](https://docs.victoriametrics.com/keyConcepts.html#time-series)
-with `http_request_duration_seconds_bucket` name. This alerting query can be speed up by pre-calculating
+with the `http_request_duration_seconds_bucket` name. This alerting query can be accelerated by pre-calculating
 the `sum(increase(http_request_duration_seconds_bucket[5m])) without (instance)` via a [recording rule](https://docs.victoriametrics.com/vmalert.html#recording-rules).
 But this recording rule may also take too much time to execute. In this case the slow recording rule
 can be substituted with the following [stream aggregation config](#stream-aggregation-config):
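The time-filter advice in the LogsQL hunk can be made concrete with a small query sketch. It is not part of the changed files: it assumes a default single-node VictoriaLogs instance listening on port 9428 with the documented `/select/logsql/query` HTTP endpoint, and `error` stands in for an arbitrary word filter.

```sh
# Narrow the search with a `_time` filter so that VictoriaLogs scans
# only the last 5 minutes of logs instead of the whole storage.
curl http://localhost:9428/select/logsql/query -d 'query=_time:5m error'
```

Without `_time:5m`, the same word filter would force a scan over the entire retention period.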
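The stream-aggregation hunk ends just before the config it refers to. The following is a minimal sketch of what such a config could look like, assuming the `match`, `interval`, `without` and `outputs` fields described in stream-aggregation.md; the file name, the choice of the `total` output and running it via the single-node `-streamAggr.config` flag are illustrative assumptions rather than text from the changed files.

```sh
# A stream aggregation config which aggregates http_request_duration_seconds_bucket
# counters across instances into one output series per bucket, so the expensive
# per-instance aggregation never has to run at query time.
cat > stream_aggr.yml <<'EOF'
- match: 'http_request_duration_seconds_bucket'
  interval: 5m
  without: [instance]
  outputs: [total]
EOF

# Apply it to a single-node VictoriaMetrics instance.
./victoria-metrics-prod -streamAggr.config=stream_aggr.yml
```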
diff --git a/docs/vmbackup.md b/docs/vmbackup.md
index 6165a28e4d..d5b6238346 100644
--- a/docs/vmbackup.md
+++ b/docs/vmbackup.md
@@ -14,7 +14,7 @@ aliases:
 `vmbackup` creates VictoriaMetrics data backups from [instant snapshots](https://docs.victoriametrics.com/Single-server-VictoriaMetrics.html#how-to-work-with-snapshots).
 
 `vmbackup` supports incremental and full backups. Incremental backups are created automatically if the destination path already contains data from the previous backup.
-Full backups can be speed up with `-origin` pointing to an already existing backup on the same remote storage. In this case `vmbackup` makes server-side copy for the shared
+Full backups can be accelerated with `-origin` pointing to an already existing backup on the same remote storage. In this case `vmbackup` makes a server-side copy for the shared
 data between the existing backup and the new backup. This saves time and data transfer costs.
 The backup process can be interrupted at any time. It automatically resumes from the interruption point when `vmbackup` is restarted with the same args.
@@ -54,7 +54,7 @@ Regular backup can be performed with the following command:
 
 ### Regular backups with server-side copy from existing backup
 
-If the destination GCS bucket already contains the previous backup at `-origin` path, then new backup can be speed up
+If the destination GCS bucket already contains the previous backup at the `-origin` path, then the new backup can be accelerated
 with the following command:
 
 ```sh
diff --git a/docs/vmbackupmanager.md b/docs/vmbackupmanager.md
index 61c326ea7f..ed0dc9efe6 100644
--- a/docs/vmbackupmanager.md
+++ b/docs/vmbackupmanager.md
@@ -124,7 +124,7 @@ The result on the GCS bucket latest folder
 
 `vmbackupmanager` uses the [smart backups](https://docs.victoriametrics.com/vmbackup.html#smart-backups) technique in order
-to speed up backups and save both data transfer costs and data copying costs. This includes server-side copy of already existing
+to accelerate backups and save both data transfer and data copying costs. This includes server-side copying of already existing
 objects. Typical object storage systems implement server-side copy by creating new names for already existing objects. This is very fast
 and efficient. Unfortunately, there are systems such as [S3 Glacier](https://aws.amazon.com/s3/storage-classes/glacier/), which perform a full
 object copy during server-side copying. This may be slow and expensive.
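The vmbackup hunks describe the `-origin` optimization, but the fenced command they refer to lies outside the diff context. Below is a hedged sketch of such an invocation: `-storageDataPath`, `-snapshot.createURL`, `-origin` and `-dst` are documented vmbackup flags, while the data path, the snapshot-creation URL, the bucket and the folder names are placeholders.

```sh
# Create a new full backup. Data shared with the existing backup at -origin
# is copied server-side on the remote storage instead of being re-uploaded.
./vmbackup -storageDataPath=/var/lib/victoria-metrics-data \
  -snapshot.createURL=http://localhost:8428/snapshot/create \
  -origin=gs://example-bucket/old-backup \
  -dst=gs://example-bucket/new-backup
```

Only the data missing from `gs://example-bucket/old-backup` is uploaded over the network, which is what makes the full backup faster and cheaper.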