docs: replace "speed up" with the clearer "accelerate" wording
parent d1d2771bee
commit df5b73ed0d
@@ -246,7 +246,7 @@ The list of LogsQL filters:
### Time filter
VictoriaLogs scans all the logs per each query if it doesn't contain the filter on [`_time` field](https://docs.victoriametrics.com/VictoriaLogs/keyConcepts.html#time-field).
-It uses various optimizations in order to speed up full scan queries without the `_time` filter,
+It uses various optimizations in order to accelerate full scan queries without the `_time` filter,
but such queries can be slow if the storage contains large number of logs over long time range. The easiest way to optimize queries
is to narrow down the search with the filter on [`_time` field](https://docs.victoriametrics.com/VictoriaLogs/keyConcepts.html#time-field).
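For example, a query narrowed to the last 5 minutes with a `_time` filter could look like the following (an editorial sketch, not part of the original diff):

```logsql
_time:5m error
```

This matches log entries containing the word `error` over the last 5 minutes, so only recently ingested data needs to be scanned.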
@@ -124,7 +124,7 @@ Sometimes [alerting queries](https://docs.victoriametrics.com/vmalert.html#alert
disk IO and network bandwidth at metrics storage side. For example, if `http_request_duration_seconds` histogram is generated by thousands
of application instances, then the alerting query `histogram_quantile(0.99, sum(increase(http_request_duration_seconds_bucket[5m])) without (instance)) > 0.5`
can become slow, since it needs to scan too big number of unique [time series](https://docs.victoriametrics.com/keyConcepts.html#time-series)
-with `http_request_duration_seconds_bucket` name. This alerting query can be speed up by pre-calculating
+with `http_request_duration_seconds_bucket` name. This alerting query can be accelerated by pre-calculating
the `sum(increase(http_request_duration_seconds_bucket[5m])) without (instance)` via [recording rule](https://docs.victoriametrics.com/vmalert.html#recording-rules).
But this recording rule may take too much time to execute too. In this case the slow recording rule can be substituted
with the following [stream aggregation config](#stream-aggregation-config):
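The config itself is cut off by this diff hunk. As a rough sketch, assuming the standard stream aggregation fields (`match`, `interval`, `without`, `outputs`) and an arbitrarily chosen `total` output, such a config could look roughly like:

```yaml
# Hypothetical stream aggregation config sketch; the actual config in the docs may differ.
- match: 'http_request_duration_seconds_bucket'
  interval: 5m
  without: [instance]
  outputs: [total]
```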
@@ -14,7 +14,7 @@ aliases:
`vmbackup` creates VictoriaMetrics data backups from [instant snapshots](https://docs.victoriametrics.com/Single-server-VictoriaMetrics.html#how-to-work-with-snapshots).
`vmbackup` supports incremental and full backups. Incremental backups are created automatically if the destination path already contains data from the previous backup.
-Full backups can be speed up with `-origin` pointing to an already existing backup on the same remote storage. In this case `vmbackup` makes server-side copy for the shared
+Full backups can be accelerated with `-origin` pointing to an already existing backup on the same remote storage. In this case `vmbackup` makes server-side copy for the shared
data between the existing backup and new backup. It saves time and costs on data transfer.
Backup process can be interrupted at any time. It is automatically resumed from the interruption point when restarting `vmbackup` with the same args.
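As an illustration of the `-origin` usage described above, a hedged sketch of a full backup that reuses an existing backup for server-side copy (bucket name and paths are placeholders; the flags are from the `vmbackup` documentation):

```sh
# Create a full backup, reusing data already present at -origin via server-side copy.
./vmbackup -storageDataPath=/victoria-metrics-data \
  -snapshot.createURL=http://localhost:8428/snapshot/create \
  -origin=gs://<bucket>/existing-backup \
  -dst=gs://<bucket>/new-backup
```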
@@ -54,7 +54,7 @@ Regular backup can be performed with the following command:
### Regular backups with server-side copy from existing backup
-If the destination GCS bucket already contains the previous backup at `-origin` path, then new backup can be speed up
+If the destination GCS bucket already contains the previous backup at `-origin` path, then new backup can be accelerated
with the following command:
```sh
@@ -124,7 +124,7 @@ The result on the GCS bucket
<img alt="latest folder" src="vmbackupmanager_latest_folder.webp">
`vmbackupmanager` uses [smart backups](https://docs.victoriametrics.com/vmbackup.html#smart-backups) technique in order
-to speed up backups and save both data transfer costs and data copying costs. This includes server-side copy of already existing
+to accelerate backups and save both data transfer costs and data copying costs. This includes server-side copy of already existing
objects. Typical object storage systems implement server-side copy by creating new names for already existing objects.
This is very fast and efficient. Unfortunately there are systems such as [S3 Glacier](https://aws.amazon.com/s3/storage-classes/glacier/),
which perform full object copy during server-side copying. This may be slow and expensive.