Mirror of https://github.com/VictoriaMetrics/VictoriaMetrics.git, synced 2024-12-15 16:30:55 +01:00
docs/Single-server-VictoriaMetrics.md: clarify that the storage size depends on the number of samples per series
This commit is contained in: parent c9229e3c0b, commit 65b4ae95e3
@@ -1113,7 +1113,8 @@ A rough estimation of the required resources for ingestion path:
 * Storage space: less than a byte per data point on average. So, ~260GB is required for storing a month-long insert stream
   of 100K data points per second.
-  The actual storage size heavily depends on data randomness (entropy). Higher randomness means higher storage size requirements.
+  The actual storage size heavily depends on data randomness (entropy) and the average number of samples per time series.
+  Higher randomness means higher storage size requirements. Lower average number of samples per time series means higher storage requirement.
   Read [this article](https://medium.com/faun/victoriametrics-achieving-better-compression-for-time-series-data-than-gorilla-317bc1f95932)
   for details.
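The ~260GB figure in the hunk above follows from simple arithmetic: 100K data points per second over a 30-day month, at roughly one byte per point. A minimal sketch in Go that reproduces the estimate (the 1 byte/point figure is the documented upper bound; actual compressed size is usually below it):

```go
package main

import "fmt"

func main() {
	// Assumed workload from the docs: 100K data points per second,
	// stored for 30 days, at an upper bound of ~1 byte per data point.
	const (
		pointsPerSecond = 100_000
		secondsPerDay   = 24 * 60 * 60
		days            = 30
		bytesPerPoint   = 1.0 // upper bound; the docs say "less than a byte" on average
	)
	totalPoints := float64(pointsPerSecond) * secondsPerDay * days
	sizeGB := totalPoints * bytesPerPoint / 1e9
	fmt.Printf("~%.0fGB\n", sizeGB) // prints "~259GB", i.e. roughly 260GB
}
```

Note that this is only an upper-bound sketch: as the amended docs text explains, the real footprint shifts with data entropy and with the average number of samples per series, since short series amortize per-series overhead over fewer points.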