diff --git a/README.md b/README.md
index 4164ed8eb..7ecb4cfef 100644
--- a/README.md
+++ b/README.md
@@ -874,10 +874,9 @@ The most interesting metrics are:
 * `vm_rows{type="indexdb"}` - the number of rows in inverted index. High value for this number usually mean high churn rate for time series.
 * Sum of `vm_rows{type="storage/big"}` and `vm_rows{type="storage/small"}` - total number of `(timestamp, value)` data points
   in the database.
-* Sum of all the `vm_cache_size_bytes` metrics - the total size of all the caches in the database.
-* `vm_allowed_memory_bytes` - the maximum allowed size for caches in the database. It is calculated as `system_memory * <-memory.allowedPercent> / 100`,
-  where `system_memory` is the amount of system memory and `-memory.allowedPercent` is the corresponding flag value.
 * `vm_rows_inserted_total` - the total number of inserted rows since VictoriaMetrics start.
+* `vm_free_disk_space_bytes` - free space left at `-storageDataPath`.
+* `sum(vm_data_size_bytes)` - the total data size on disk.
 
 ### Troubleshooting
 
@@ -893,7 +892,9 @@ The most interesting metrics are:
 
 * VictoriaMetrics requires free disk space for [merging data files to bigger ones](https://medium.com/@valyala/how-victoriametrics-makes-instant-snapshots-for-multi-terabyte-time-series-data-e1f3fb0e0282).
   It may slow down when there is no enough free space left. So make sure `-storageDataPath` directory
-  has at least 20% of free space comparing to disk size.
+  has at least 20% of free space comparing to disk size. The remaining amount of free space
+  can be [monitored](#monitoring) via the `vm_free_disk_space_bytes` metric. The total size of data
+  stored on the disk can be monitored via the sum of `vm_data_size_bytes` metrics.
 
 * If VictoriaMetrics doesn't work because of certain parts are corrupted due to disk errors,
   then just remove directories with broken parts. This will recover VictoriaMetrics at the cost
diff --git a/app/vmstorage/main.go b/app/vmstorage/main.go
index ac9b0a251..d01612a63 100644
--- a/app/vmstorage/main.go
+++ b/app/vmstorage/main.go
@@ -9,6 +9,7 @@ import (
 	"time"
 
 	"github.com/VictoriaMetrics/VictoriaMetrics/lib/encoding"
+	"github.com/VictoriaMetrics/VictoriaMetrics/lib/fs"
 	"github.com/VictoriaMetrics/VictoriaMetrics/lib/httpserver"
 	"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
 	"github.com/VictoriaMetrics/VictoriaMetrics/lib/storage"
@@ -247,6 +248,10 @@ func registerStorageMetrics() {
 		return &sm.IndexDBMetrics
 	}
 
+	metrics.NewGauge(fmt.Sprintf(`vm_free_disk_space_bytes{path=%q}`, *DataPath), func() float64 {
+		return float64(fs.MustGetFreeSpace(*DataPath))
+	})
+
 	metrics.NewGauge(`vm_active_merges{type="storage/big"}`, func() float64 {
 		return float64(tm().ActiveBigMerges)
 	})
diff --git a/docs/Single-server-VictoriaMetrics.md b/docs/Single-server-VictoriaMetrics.md
index c183f85d3..66a7e1e23 100644
--- a/docs/Single-server-VictoriaMetrics.md
+++ b/docs/Single-server-VictoriaMetrics.md
@@ -864,10 +864,9 @@ The most interesting metrics are:
 * `vm_rows{type="indexdb"}` - the number of rows in inverted index. High value for this number usually mean high churn rate for time series.
 * Sum of `vm_rows{type="storage/big"}` and `vm_rows{type="storage/small"}` - total number of `(timestamp, value)` data points
   in the database.
-* Sum of all the `vm_cache_size_bytes` metrics - the total size of all the caches in the database.
-* `vm_allowed_memory_bytes` - the maximum allowed size for caches in the database. It is calculated as `system_memory * <-memory.allowedPercent> / 100`,
-  where `system_memory` is the amount of system memory and `-memory.allowedPercent` is the corresponding flag value.
 * `vm_rows_inserted_total` - the total number of inserted rows since VictoriaMetrics start.
+* `vm_free_disk_space_bytes` - free space left at `-storageDataPath`.
+* `sum(vm_data_size_bytes)` - the total data size on disk.
 
 ### Troubleshooting
 
@@ -883,7 +882,9 @@ The most interesting metrics are:
 
 * VictoriaMetrics requires free disk space for [merging data files to bigger ones](https://medium.com/@valyala/how-victoriametrics-makes-instant-snapshots-for-multi-terabyte-time-series-data-e1f3fb0e0282).
   It may slow down when there is no enough free space left. So make sure `-storageDataPath` directory
-  has at least 20% of free space comparing to disk size.
+  has at least 20% of free space comparing to disk size. The remaining amount of free space
+  can be [monitored](#monitoring) via the `vm_free_disk_space_bytes` metric. The total size of data
+  stored on the disk can be monitored via the sum of `vm_data_size_bytes` metrics.
 
 * If VictoriaMetrics doesn't work because of certain parts are corrupted due to disk errors,
   then just remove directories with broken parts. This will recover VictoriaMetrics at the cost
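The docs changed above recommend keeping at least 20% of the disk free, and the patch exposes the numbers needed to check that via `vm_free_disk_space_bytes` and `vm_data_size_bytes`. Below is a minimal Go sketch of how such a check could be expressed with the same `fs.MustGetFreeSpace` helper this patch wires into the new gauge; the data path, the data-size argument, and the approximation of disk capacity as free space plus stored data size are assumptions for illustration only, not part of the change.

```go
package main

import (
	"fmt"

	"github.com/VictoriaMetrics/VictoriaMetrics/lib/fs"
)

// hasEnoughFreeSpace reports whether free space at dataPath is at least 20%
// of the estimated disk capacity. Capacity is approximated as free space plus
// the stored data size, which assumes the volume is dedicated to
// -storageDataPath (an assumption of this sketch, not of the patch).
func hasEnoughFreeSpace(dataPath string, dataSizeBytes uint64) bool {
	freeBytes := fs.MustGetFreeSpace(dataPath)
	capacityBytes := freeBytes + dataSizeBytes
	return float64(freeBytes) >= 0.2*float64(capacityBytes)
}

func main() {
	// Hypothetical values: 100 GiB of stored data at a hypothetical path.
	fmt.Println(hasEnoughFreeSpace("/victoria-metrics-data", 100<<30))
}
```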