Mirror of https://github.com/VictoriaMetrics/VictoriaMetrics.git
Commit bb7a419cc3
- Maintain a separate worker pool for each part type (in-memory, file, big and small).
Previously a single shared pool was used for merging all part types.
A single merge worker could merge parts of mixed types at once. For example,
it could simultaneously merge an in-memory part with a big file part.
Such a merge could take hours because of the big file part. For the whole duration
of the merge the in-memory part was pinned in memory and couldn't be persisted
to disk within the configured -inmemoryDataFlushInterval .
Another common issue when parts of mixed types were merged was uncontrolled growth
of in-memory parts or small parts while all the merge workers were busy merging
big files. Such growth could lead to significant performance degradation for queries,
since every query must check an ever-growing list of parts. It could also slow down
the registration of new time series, since VictoriaMetrics searches the indexdb
for the internal series_id of every new time series.
The third issue was graceful shutdown duration, which could become very long when
a background merge was running on in-memory parts together with big file parts.
Such a merge couldn't be interrupted, since it included in-memory parts.
A separate pool of merge workers per part type resolves all three issues:
- In-memory parts are merged to file-based parts in a timely manner, since the maximum
size of in-memory parts is limited.
- Long-running merges for big parts do not block merges for in-memory parts and small parts.
- Graceful shutdown duration is now limited by the time needed for flushing in-memory parts to files.
Merges of file parts are now canceled instantly on graceful shutdown.
- Deprecate -smallMergeConcurrency command-line flag, since the new background merge algorithm
should automatically self-tune according to the number of available CPU cores.
- Deprecate -finalMergeDelay command-line flag, since it wasn't working correctly.
It is better to run forced merge when needed - https://docs.victoriametrics.com/#forced-merge
- Tune the number of shards for pending rows and items before the data is converted
to in-memory parts and becomes visible to search. This improves the maximum data
ingestion rate and the maximum rate of new time series registration. It should reduce
the duration of data ingestion slowdowns in VictoriaMetrics cluster during events
such as re-routing, when some vmstorage nodes become temporarily unavailable.
- Prevent a possible "sync: WaitGroup misuse" panic on graceful shutdown.
This is a follow-up for