From 352485b0dee4024a05506519a2a063ee1b7f7540 Mon Sep 17 00:00:00 2001
From: Aliaksandr Valialkin
Date: Wed, 11 Nov 2020 12:32:25 +0200
Subject: [PATCH] docs/Single-server-VictoriaMetrics.md: clarify which
 directories can be removed when recovering from data corruption

---
 README.md                             | 8 +++-----
 docs/Single-server-VictoriaMetrics.md | 8 +++-----
 2 files changed, 6 insertions(+), 10 deletions(-)

diff --git a/README.md b/README.md
index dfc4f5d31..bb203cc38 100644
--- a/README.md
+++ b/README.md
@@ -1203,8 +1203,6 @@ VictoriaMetrics also exposes currently running queries with their execution time
   VictoriaMetrics [exposes](#monitoring) `vm_slow_*` metrics, which could be used as an indicator of low amounts of RAM.
   It is recommended increasing the amount of RAM on the node with VictoriaMetrics in order to improve
   ingestion and query performance in this case.
-  Another option is to increase `-memory.allowedPercent` command-line flag value. Be careful with this
-  option, since too big value for `-memory.allowedPercent` may result in high I/O usage.
 
 * VictoriaMetrics prioritizes data ingestion over data querying. So if it has no enough resources for data ingestion,
   then data querying may slow down significantly.
@@ -1219,9 +1217,9 @@ VictoriaMetrics also exposes currently running queries with their execution time
   which would start background merge if they had more free disk space.
 
 * If VictoriaMetrics doesn't work because of certain parts are corrupted due to disk errors,
-  then just remove directories with broken parts. This will recover VictoriaMetrics at the cost
-  of data loss stored in the broken parts. In the future, `vmrecover` tool will be created
-  for automatic recovering from such errors.
+  then just remove the directories with broken parts. It is safe to remove subdirectories under the `<-storageDataPath>/data/{big,small}/YYYY_MM` directories
+  when VictoriaMetrics isn't running. This recovers VictoriaMetrics at the cost of losing the data stored in the deleted broken parts.
+  In the future, a `vmrecover` tool will be created for automatic recovery from such errors.
 
 * If you see gaps on the graphs, try resetting the cache by sending request to `/internal/resetRollupResultCache`.
   If this removes gaps on the graphs, then it is likely data with timestamps older than `-search.cacheTimestampOffset`
diff --git a/docs/Single-server-VictoriaMetrics.md b/docs/Single-server-VictoriaMetrics.md
index dfc4f5d31..bb203cc38 100644
--- a/docs/Single-server-VictoriaMetrics.md
+++ b/docs/Single-server-VictoriaMetrics.md
@@ -1203,8 +1203,6 @@ VictoriaMetrics also exposes currently running queries with their execution time
   VictoriaMetrics [exposes](#monitoring) `vm_slow_*` metrics, which could be used as an indicator of low amounts of RAM.
   It is recommended increasing the amount of RAM on the node with VictoriaMetrics in order to improve
   ingestion and query performance in this case.
-  Another option is to increase `-memory.allowedPercent` command-line flag value. Be careful with this
-  option, since too big value for `-memory.allowedPercent` may result in high I/O usage.
 
 * VictoriaMetrics prioritizes data ingestion over data querying. So if it has no enough resources for data ingestion,
   then data querying may slow down significantly.
@@ -1219,9 +1217,9 @@ VictoriaMetrics also exposes currently running queries with their execution time
   which would start background merge if they had more free disk space.
 
 * If VictoriaMetrics doesn't work because of certain parts are corrupted due to disk errors,
-  then just remove directories with broken parts. This will recover VictoriaMetrics at the cost
-  of data loss stored in the broken parts. In the future, `vmrecover` tool will be created
-  for automatic recovering from such errors.
+  then just remove the directories with broken parts. It is safe to remove subdirectories under the `<-storageDataPath>/data/{big,small}/YYYY_MM` directories
+  when VictoriaMetrics isn't running. This recovers VictoriaMetrics at the cost of losing the data stored in the deleted broken parts.
+  In the future, a `vmrecover` tool will be created for automatic recovery from such errors.
 
 * If you see gaps on the graphs, try resetting the cache by sending request to `/internal/resetRollupResultCache`.
   If this removes gaps on the graphs, then it is likely data with timestamps older than `-search.cacheTimestampOffset`
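For illustration, here is a minimal Go sketch of the manual recovery step described by the new wording above. It is not part of VictoriaMetrics and not the planned `vmrecover` tool; the only facts taken from the patch are the `<-storageDataPath>/data/{big,small}/YYYY_MM` layout and the requirement that VictoriaMetrics must be stopped. The program name, its command-line shape and the safety check are assumptions.

```go
// removebrokenparts.go - a hypothetical helper, NOT part of VictoriaMetrics.
// It deletes part subdirectories reported as broken, but only if they sit
// under <-storageDataPath>/data/{big,small}/YYYY_MM/ as stated in the docs.
// Run it only while VictoriaMetrics is stopped.
package main

import (
	"log"
	"os"
	"path/filepath"
	"regexp"
)

// partDirRe matches .../data/{big,small}/YYYY_MM/<part> - the only
// locations the documentation above says are safe to delete.
var partDirRe = regexp.MustCompile(`/data/(big|small)/\d{4}_\d{2}/[^/]+$`)

func main() {
	if len(os.Args) < 2 {
		log.Fatalf("usage: %s <brokenPartDir> ...", os.Args[0])
	}
	for _, dir := range os.Args[1:] {
		abs, err := filepath.Abs(dir)
		if err != nil {
			log.Fatalf("cannot resolve %q: %s", dir, err)
		}
		// Refuse to delete anything outside the documented safe layout.
		if !partDirRe.MatchString(filepath.ToSlash(abs)) {
			log.Fatalf("%q is not under data/{big,small}/YYYY_MM; refusing to delete", abs)
		}
		if err := os.RemoveAll(abs); err != nil {
			log.Fatalf("cannot remove %q: %s", abs, err)
		}
		log.Printf("removed broken part directory %q", abs)
	}
}
```

A hypothetical invocation, with the server stopped, would be `go run removebrokenparts.go /var/lib/victoriametrics/data/small/2020_11/<broken-part>`; the storage path here is an example, not a default taken from the patch.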