docs/Single-server-VictoriaMetrics.md: clarify that the storage size depends on the number of samples per series

Aliaksandr Valialkin 2021-05-24 15:47:44 +03:00
parent b1e8d92577
commit a47d4927d2
2 changed files with 4 additions and 2 deletions


@@ -1109,7 +1109,8 @@ A rough estimation of the required resources for ingestion path:
* Storage space: less than a byte per data point on average. So, ~260GB is required for storing a month-long insert stream
of 100K data points per second.
-The actual storage size heavily depends on data randomness (entropy). Higher randomness means higher storage size requirements.
+The actual storage size heavily depends on data randomness (entropy) and the average number of samples per time series.
+Higher randomness means higher storage size requirements. A lower average number of samples per time series means higher storage requirements.
Read [this article](https://medium.com/faun/victoriametrics-achieving-better-compression-for-time-series-data-than-gorilla-317bc1f95932)
for details.


@@ -1113,7 +1113,8 @@ A rough estimation of the required resources for ingestion path:
* Storage space: less than a byte per data point on average. So, ~260GB is required for storing a month-long insert stream
of 100K data points per second.
-The actual storage size heavily depends on data randomness (entropy). Higher randomness means higher storage size requirements.
+The actual storage size heavily depends on data randomness (entropy) and the average number of samples per time series.
+Higher randomness means higher storage size requirements. A lower average number of samples per time series means higher storage requirements.
Read [this article](https://medium.com/faun/victoriametrics-achieving-better-compression-for-time-series-data-than-gorilla-317bc1f95932)
for details.
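
The ~260GB figure in the hunks above is plain arithmetic: 100K data points per second over a 30-day month at roughly one byte per data point after compression. A minimal sketch of that back-of-the-envelope estimate (the one-byte-per-sample average is the docs' rough upper bound, not a guarantee; actual bytes per sample shift with data entropy and with the average number of samples per series, which is exactly what this commit clarifies):

```go
package main

import "fmt"

func main() {
	// Rough ingestion-path storage estimate from the docs:
	// 100K data points per second for one 30-day month,
	// at less than ~1 byte per data point on average after compression.
	const samplesPerSecond = 100_000
	const secondsPerMonth = 30 * 24 * 3600 // 2,592,000 seconds

	totalSamples := int64(samplesPerSecond) * secondsPerMonth // ~2.59e11 samples
	bytesPerSample := 1.0                                      // assumed rough upper-bound average
	storageGB := float64(totalSamples) * bytesPerSample / 1e9

	fmt.Printf("samples per month: %d\n", totalSamples)
	fmt.Printf("estimated storage: ~%.0f GB\n", storageGB) // ~259 GB, i.e. the ~260GB figure above
}
```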