Commit Graph

2455 Commits

Author SHA1 Message Date
Aliaksandr Valialkin
138a4d1c2b
lib/streamaggr: ignore the first sample in new time series for staleness_interval seconds after the stream aggregation start for the total and increase outputs 2024-03-04 01:49:26 +02:00
Aliaksandr Valialkin
0422ae01ba
lib/streamaggr: flush dedup state and aggregation state in parallel on all the available CPU cores
This should reduce the time needed for aggregation state flush on systems with many CPU cores
2024-03-04 01:21:50 +02:00
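
A minimal Go sketch of the parallel flush described in the commit above, assuming the dedup state is split into independent shards (the dedupShard type and its flush method are hypothetical placeholders):

    package example

    import (
        "runtime"
        "sync"
    )

    // dedupShard is a hypothetical stand-in for one shard of deduplication state.
    type dedupShard struct{ samples map[string]float64 }

    func (s *dedupShard) flush() { s.samples = make(map[string]float64) }

    // flushShardsConcurrently flushes independent shards on all available CPU cores
    // instead of flushing them sequentially from a single goroutine.
    func flushShardsConcurrently(shards []*dedupShard) {
        shardCh := make(chan *dedupShard, len(shards))
        for _, s := range shards {
            shardCh <- s
        }
        close(shardCh)

        var wg sync.WaitGroup
        for i := 0; i < runtime.GOMAXPROCS(-1); i++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                for s := range shardCh {
                    s.flush()
                }
            }()
        }
        wg.Wait()
    }
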
Aliaksandr Valialkin
3c06b3af92
lib/streamaggr: add a benchmark for flushing dedup state 2024-03-04 01:16:30 +02:00
Aliaksandr Valialkin
9648c88b71
lib/streamaggr: add a benchmark for measuring the performance of aggregator.flush 2024-03-04 00:45:48 +02:00
Aliaksandr Valialkin
54a1c506e3
lib/streamaggr: add a benchmark for de-duplicating 1M samples 2024-03-04 00:26:59 +02:00
Aliaksandr Valialkin
614d34e539
lib/prompbmarshal: use clear() instead of a loop for clearing tss inside ResetTimeSeries() 2024-03-03 23:40:34 +02:00
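
The clear() builtin (Go 1.21+) zeroes all elements of a slice in one call, which is what the loop used to do element by element. A hedged sketch with a hypothetical TimeSeries type standing in for prompbmarshal.TimeSeries:

    package example

    // TimeSeries is a hypothetical stand-in for prompbmarshal.TimeSeries.
    type TimeSeries struct {
        Labels  []string
        Samples []float64
    }

    // ResetTimeSeries zeroes the elements so the memory they reference can be
    // garbage-collected, then returns the slice truncated for re-use.
    func ResetTimeSeries(tss []TimeSeries) []TimeSeries {
        // Pre-Go 1.21 equivalent:
        //   for i := range tss {
        //       tss[i] = TimeSeries{}
        //   }
        clear(tss)
        return tss[:0]
    }
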
Aliaksandr Valialkin
4e65636b44
lib/promutils: optimize LabelsCompressor.Decompress by using a specialized labelsMap struct instead of sync.Map
The labelsMap struct exploits the fact that label indexes are condensed around 0,
so it stores the referred labels in a slice instead of a map and uses the slice index as the label key.
This increases LabelsCompressor.Decompress performance by up to 3x.
This also reduces the latency of data flushes in stream aggregation.
2024-03-03 23:21:25 +02:00
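
A rough sketch of the slice-backed lookup idea, assuming dense keys near zero; this is an illustration rather than the actual lib/promutils code:

    package example

    import "sync"

    // Label is a hypothetical stand-in for prompbmarshal.Label.
    type Label struct{ Name, Value string }

    // labelsMap maps small, dense uint64 keys to labels. Because the keys are
    // condensed around 0, a plain slice indexed by the key can replace sync.Map,
    // turning each lookup into a bounds-checked slice access.
    type labelsMap struct {
        mu     sync.Mutex
        labels []*Label // index == label key
    }

    func (lm *labelsMap) store(idx uint64, label *Label) {
        lm.mu.Lock()
        defer lm.mu.Unlock()
        for uint64(len(lm.labels)) <= idx {
            lm.labels = append(lm.labels, nil)
        }
        lm.labels[idx] = label
    }

    func (lm *labelsMap) load(idx uint64) (*Label, bool) {
        lm.mu.Lock()
        defer lm.mu.Unlock()
        if idx >= uint64(len(lm.labels)) {
            return nil, false
        }
        l := lm.labels[idx]
        return l, l != nil
    }
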
Aliaksandr Valialkin
28a9e92b5e
lib/streamaggr: huge pile of changes
- Reduce memory usage by up to 5x when de-duplicating samples across a big number of time series.
- Reduce memory usage by up to 5x when aggregating across a big number of output time series.
- Add lib/promutils.LabelsCompressor, which is going to be used by other VictoriaMetrics components
  for reducing memory usage for marshaled []prompbmarshal.Label.
- Add `dedup_interval` option to the aggregation config, which allows setting individual
  deduplication intervals per aggregation.
- Add `keep_metric_names` option to the aggregation config, which allows keeping the original
  metric names in the output samples.
- Add `unique_samples` output, which counts the number of unique sample values.
- Add `increase_prometheus` and `total_prometheus` outputs, which ignore the first sample
  for each newly encountered time series.
- Use 64-bit hashes instead of marshaled labels as map keys when calculating the `count_series` output.
  This makes https://github.com/VictoriaMetrics/VictoriaMetrics/pull/5579 obsolete
- Expose various metrics, which may help debugging stream aggregation:
  - vm_streamaggr_dedup_state_size_bytes - the size of data structures responsible for deduplication
  - vm_streamaggr_dedup_state_items_count - the number of items in the deduplication data structures
  - vm_streamaggr_labels_compressor_size_bytes - the size of labels compressor data structures
  - vm_streamaggr_labels_compressor_items_count - the number of entries in the labels compressor
  - vm_streamaggr_flush_duration_seconds - a histogram, which shows the duration of stream aggregation flushes
  - vm_streamaggr_dedup_flush_duration_seconds - a histogram, which shows the duration of deduplication flushes
  - vm_streamaggr_flush_timeouts_total - counter for timed out stream aggregation flushes,
    which took longer than the configured interval
  - vm_streamaggr_dedup_flush_timeouts_total - counter for timed out deduplication flushes,
    which took longer than the configured dedup_interval
- Actualize docs/stream-aggregation.md

The memory usage reduction increases CPU usage during stream aggregation by up to 30%.

This commit is based on https://github.com/VictoriaMetrics/VictoriaMetrics/pull/5850
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/5898
2024-03-02 02:42:50 +02:00
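
A hedged sketch of the "64-bit hashes as map keys" idea used for the `count_series` output; the hash function here (xxhash) is an assumption for illustration only:

    package example

    import "github.com/cespare/xxhash/v2"

    // countSeriesState counts unique series by storing 64-bit hashes of the
    // marshaled labels instead of the marshaled labels themselves, which keeps
    // the map keys small and of fixed size.
    type countSeriesState struct {
        seen map[uint64]struct{}
    }

    func newCountSeriesState() *countSeriesState {
        return &countSeriesState{seen: make(map[uint64]struct{})}
    }

    func (cs *countSeriesState) add(marshaledLabels []byte) {
        cs.seen[xxhash.Sum64(marshaledLabels)] = struct{}{}
    }

    func (cs *countSeriesState) count() int { return len(cs.seen) }
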
Aliaksandr Valialkin
eb8e95516f
lib/streamaggr: allow one second aggregation interval 2024-03-01 21:33:16 +02:00
Aliaksandr Valialkin
cf2e80a869
lib/promrelabel: use clear() function inside CleanLabels() 2024-03-01 21:33:15 +02:00
Aliaksandr Valialkin
c8c2c5f8e5
lib/fs: fix GOOS=windows build after f8baf29b6e 2024-03-01 01:46:29 +02:00
Aliaksandr Valialkin
5aa3dfbd20
lib/protoparser/opentelemetry/firehose: verify that the full response is parsed properly in ProcessRequestBody
This is a follow-up for bf9cb84575
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/pull/5899
2024-03-01 00:39:10 +02:00
Andrii Chubatiuk
bf9cb84575
opentelemetry: fix firehose message parsing (#5899)
Co-authored-by: Andrii Chubatiuk <wachy@Andriis-MBP-2.lan>
2024-03-01 00:23:54 +02:00
Aliaksandr Valialkin
6a8dc74ee7
lib/mergeset: use unsafe.Slice and unsafe.String instead of deprecated reflect.SliceHeader with unsafe conversion from slice header to string header 2024-02-29 17:29:33 +02:00
Aliaksandr Valialkin
38e0397ebd
lib/bytesutil: use unsafe.String instead of unsafe conversion of slice header to string header 2024-02-29 17:27:51 +02:00
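
The reflect.SliceHeader-free conversions mentioned in the commits above boil down to the unsafe.String / unsafe.Slice helpers available since Go 1.20; a sketch of the pattern:

    package example

    import "unsafe"

    // toUnsafeString returns a string that shares the memory of b.
    // The caller must not modify b while the returned string is in use.
    func toUnsafeString(b []byte) string {
        return unsafe.String(unsafe.SliceData(b), len(b))
    }

    // toUnsafeBytes returns a byte slice that shares the memory of s.
    // The returned slice must not be modified.
    func toUnsafeBytes(s string) []byte {
        return unsafe.Slice(unsafe.StringData(s), len(s))
    }
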
Aliaksandr Valialkin
e959f54351
lib/fs: properly handle the case when data=nil is passed to mUnmap 2024-02-29 17:26:07 +02:00
Aliaksandr Valialkin
c75bfd5b07
lib/storage: use unsafe.Slice instead of deprecated reflect.SliceHeader 2024-02-29 17:24:34 +02:00
Aliaksandr Valialkin
bb48d416fc
lib/protoparser/csvimport: use unsafe.Slice instead of deprecated reflect.SliceHeader 2024-02-29 17:19:57 +02:00
Aliaksandr Valialkin
f8baf29b6e
lib/fs: use unsafe.Slice instead of deprecated reflect.SliceHeader 2024-02-29 17:18:33 +02:00
Aliaksandr Valialkin
7a04f99c72
lib/fastnum: use unsafe.Slice() instead of deprecated reflect.SliceHeader 2024-02-29 17:17:13 +02:00
Aliaksandr Valialkin
a3cf3d7de1
lib/bytesutil: make BenchmarkToUnsafeString and BenchmarkToUnsafeBytes more reliable
This is needed for https://github.com/VictoriaMetrics/VictoriaMetrics/pull/5880
2024-02-29 17:11:03 +02:00
helen
8266b77d0e
Optimize ToUnsafeBytes to make it leaner, more standards-compliant and slightly faster (#5880)
2024-02-29 17:10:10 +02:00
XLONG96
a5795f533d
lib/logstorage: avoid panic when parsing regex with stream filter (#5897) 2024-02-29 15:31:54 +02:00
Aliaksandr Valialkin
04d13f6149
app/{vminsert,vmagent}: follow-up after 67a55b89a4
- Document the ability to read OpenTelemetry data from Amazon Firehose at docs/CHANGELOG.md

- Simplify parsing of Firehose data. There is no need to optimize the parsing with fastjson
  and byte slice tricks, since the OpenTelemetry protocol is really slow because of over-engineering.
  It is better to write clear code for better maintainability in the future.

- Move Firehose parser from /lib/protoparser/firehose to lib/protoparser/opentelemetry/firehose,
  since it is used only by opentelemetry parser.

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/pull/5893
2024-02-29 14:38:23 +02:00
Andrii Chubatiuk
67a55b89a4
{vmagent,vminsert}: added firehose http destination opentelemetry data ingestion support (#5893)
Co-authored-by: Andrii Chubatiuk <wachy@Andriis-MBP-2.lan>
Co-authored-by: Aliaksandr Valialkin <valyala@victoriametrics.com>
2024-02-29 14:03:24 +02:00
Aliaksandr Valialkin
6f203ebc9f
lib/streamaggr: make the BenchmarkAggregatorsPushByJobAvg closer to production case with long list of labels per sample 2024-02-29 02:39:16 +02:00
Hui Wang
8c33ba537a
chore: add actual request size in error message (#5889) 2024-02-28 22:33:08 +08:00
Aliaksandr Valialkin
7e1dd8ab9d
lib: consistently use atomic.* types instead of atomic.* functions
See ea9e2b19a5
2024-02-24 02:07:53 +02:00
Aliaksandr Valialkin
d5ca67e667
lib/backup/actions: expose vm_backups_downloaded_bytes_total metric in order to be consistent with vm_backups_uploaded_bytes_total metric 2024-02-24 01:14:50 +02:00
Aliaksandr Valialkin
906a35bdbb
lib/backup/actions: update the vm_backups_uploaded_bytes_total metric during the file upload instead of after it
This solves two issues:

1. The vm_backups_uploaded_bytes_total metric will grow more smoothly
2. This prevents an int overflow in metrics.Counter.Add() when uploading files bigger than 2GiB
2024-02-24 01:07:20 +02:00
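
One way to update a counter during an upload rather than after it is a counting writer wrapper; this is a hedged illustration of the idea, not the actual lib/backup/actions code:

    package example

    import "io"

    // countingWriter reports bytes as they are written, so the metric grows
    // smoothly and no single Add() call receives a multi-GiB value.
    type countingWriter struct {
        w   io.Writer
        add func(n int) // e.g. the Add method of an uploaded-bytes counter
    }

    func (cw *countingWriter) Write(p []byte) (int, error) {
        n, err := cw.w.Write(p)
        if n > 0 {
            cw.add(n)
        }
        return n, err
    }

    // upload copies src to dst, updating the counter per write instead of once at the end.
    func upload(dst io.Writer, src io.Reader, add func(int)) error {
        _, err := io.Copy(&countingWriter{w: dst, add: add}, src)
        return err
    }
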
Aliaksandr Valialkin
ece86cd314
lib/backup/actions: consistently use atomic.* types instead of atomic.* functions
See ea9e2b19a5
2024-02-24 01:02:21 +02:00
Aliaksandr Valialkin
55f1f24e62
lib/storage: replace the remaining atomic.* functions with atomic.* types for the sake of consistency
See ea9e2b19a5
2024-02-24 00:53:30 +02:00
Aliaksandr Valialkin
b3d9d36fb3
lib/storage: consistently use atomic.* types instead of atomic.* function calls on ordinary types
See ea9e2b19a5
2024-02-24 00:15:26 +02:00
Aliaksandr Valialkin
4617dc8bbe
lib/logstorage: consistently use atomic.* types instead of atomic.* functions on regular types
See ea9e2b19a5
2024-02-23 23:46:13 +02:00
Aliaksandr Valialkin
f81b480905
lib/mergeset: consistently use atomic.* types instead of atomic.* function calls on ordinary types
See ea9e2b19a5
2024-02-23 23:29:35 +02:00
Aliaksandr Valialkin
275335c181
lib/logstorage: consistently use atomic.* type for refCount and mustDrop fields in datadb and storage structs in the same way as it is used in lib/storage
See ea9e2b19a5 and a204fd69f1
2024-02-23 23:04:42 +02:00
Aliaksandr Valialkin
5c89150fc9
lib/mergeset: consistently use atomic.* type for refCount and mustDrop fields in table struct in the same way as it is used in lib/storage
See ea9e2b19a5 and a204fd69f1
2024-02-23 22:59:23 +02:00
Aliaksandr Valialkin
a204fd69f1
lib/storage: consistently use atomic.* type for refCount and mustDrop fields in indexDB, table and partition structs
See ea9e2b19a5
2024-02-23 22:54:59 +02:00
Aliaksandr Valialkin
0f1ea36dc8
lib/storage: convert dedupsDuringMerge from uint64 to atomic.Uint64
This should simplify code maintenance by gradually converting to atomic.* types instead of calling atomic.* functions
on int and bool types.

See ea9e2b19a5
2024-02-23 22:52:00 +02:00
Aliaksandr Valialkin
ea9e2b19a5
lib/{storage,mergeset}: properly fix 'unaligned 64-bit atomic operation' panic on 32-bit architectures
The issue has been introduced in bace9a2501
The improper fix was in d4c0615dcd ,
since it fixed the issue just by accident, because the Go compiler aligned the rawRowsShards field
on a 4-byte boundary inside the partition struct.

The proper fix is to use atomic.Int64 field - this guarantees that the access to this field
won't result in unaligned 64-bit atomic operation. See https://github.com/golang/go/issues/50860
and https://github.com/golang/go/issues/19057
2024-02-23 22:27:06 +02:00
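
The gist of the proper fix, as a hedged sketch: an atomic.Int64 field is guaranteed by the sync/atomic docs to be 8-byte aligned even inside structs on 32-bit platforms, unlike a plain int64 accessed via atomic.AddInt64:

    package example

    import "sync/atomic"

    // examplePartition shows the field-type change. The struct layout is illustrative only.
    type examplePartition struct {
        name string // arbitrary field that could break 8-byte alignment of a plain int64

        // Before: `n int64` updated via atomic.AddInt64(&p.n, 1), which panics on
        // 32-bit architectures if the field ends up only 4-byte aligned.
        // atomic.Int64 guarantees the required alignment.
        n atomic.Int64
    }

    func (p *examplePartition) inc() int64 {
        return p.n.Add(1)
    }
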
Aliaksandr Valialkin
cf94522389
lib/httpserver: return the default value for -http.connTimeout back to 2 minutes
It has turned out that there are VictoriaMetrics users who rely on the fact that
VictoriaMetrics components were closing incoming connections to -httpListenAddr every 2 minutes
by default. So let's restore this default value in order to fix the breaking change
made at d8c1db7953 .

See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/1304#issuecomment-1961891450 .
2024-02-23 22:03:37 +02:00
hagen1778
c8d1d2ab72
lib/storage: cleanup after d4c0615dcd
Signed-off-by: hagen1778 <roman@victoriametrics.com>
2024-02-23 18:53:55 +01:00
Dmytro Kozlov
d4c0615dcd
lib/storage: fix aligning (#5860) 2024-02-23 16:37:21 +01:00
Aliaksandr Valialkin
9bad52b687
app/vmstorage: deprecate -snapshotCreateTimeout command-line flag
Creating snapshot shouldn't time out under normal conditions.
The timeout was related to the bug, which has been fixed in 6460475e3b .

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/3551
2024-02-23 04:49:23 +02:00
Aliaksandr Valialkin
f79944532b
lib/storage: do not drop (date, metricID) entries for the date older than 2 days if samples are ingested at this date
Previously the (date, metricID) entries for dates older than the last 2 days were removed.
This could lead to a slow check for the (date, metricID) entry in the indexdb during ingesting historical data (aka backfilling).

The issue has been introduced in 431aa16c8d
2024-02-23 04:06:19 +02:00
Aliaksandr Valialkin
f46eaf92eb
app/vmselect: add -search.maxLabelsAPIDuration and -search.maxLabelsAPISeries options for fine-tuning CPU and RAM usage for /api/v1/series , /api/v1/labels and /api/v1/label/.../values
This commit returns the limits for these endpoints, which have been removed at 5d66ee88bd ,
since it has turned out that missing limits result in high CPU usage, while the introduced concurrency limiter
results in failed lightweight requests to these endpoints because of timeouts when heavyweight requests are executed.

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/5055
2024-02-23 02:57:16 +02:00
Aliaksandr Valialkin
df7d3c55ed
lib/promutils: hide the math.Round() logic inside ParseTimeMsec() function
This should prevent from bugs similar to https://github.com/VictoriaMetrics/VictoriaMetrics/issues/5801 in the future

This is a follow-up for ce3ec3ff2e
2024-02-23 00:55:32 +02:00
Aliaksandr Valialkin
5934002b57
lib/mergeset: run go fmt after bace9a2501 2024-02-23 00:53:28 +02:00
Aliaksandr Valialkin
bace9a2501
lib/{mergeset,storage}: convert buffered items to searchable parts more optimally
Do not convert shard items to a part when a shard becomes full. Instead, collect multiple
full shards and then convert them to a searchable part at once. This reduces
the number of searchable parts, which, in turn, should increase query performance,
since queries need to scan a smaller number of parts.
2024-02-23 00:16:34 +02:00
Nikolay
07855de142
app/vmselect: change export/csv timestamp format for rfc3339 to respect milliseconds (#5853)
* app/vmselect: adds milliseconds to the CSV export response for RFC3339
* milliseconds is the standard precision for VictoriaMetrics query request responses
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/5837

* app/victoria-metrics: adds tests for csv export/import
follow-up after 3541a8d0cf96dd4f8563624c4aab6816615d0756


---------

Signed-off-by: hagen1778 <roman@victoriametrics.com>
Co-authored-by: hagen1778 <roman@victoriametrics.com>
2024-02-22 20:31:22 +01:00
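
Formatting timestamps with millisecond precision in RFC3339 form can be done with a fixed-fraction layout; a small sketch (the function name is illustrative):

    package example

    import "time"

    // formatRFC3339Millis renders a millisecond Unix timestamp with a fixed
    // three-digit fractional part, so sub-second precision isn't dropped in CSV export.
    func formatRFC3339Millis(tsMillis int64) string {
        t := time.Unix(tsMillis/1e3, (tsMillis%1e3)*1e6).UTC()
        return t.Format("2006-01-02T15:04:05.000Z07:00")
    }

    // Example: formatRFC3339Millis(1709500000123) == "2024-03-03T21:06:40.123Z"
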
Aliaksandr Valialkin
e8b3045062
lib/storage: handle the common case when the number of rows passed to flushRowsToInmemoryParts() doesn't exceed maxRawRowsPerShard 2024-02-22 20:44:11 +02:00
Aliaksandr Valialkin
73f0a805e2
lib/{storage,mergeset}: convert buffered items into searchable in-memory parts exactly once per the given flush interval
Previously the interval between item addition and its conversion to searchable in-memory part
could vary significantly because of too coarse per-second precision. Switch from fasttime.UnixTimestamp()
to time.Now().UnixMilli() for millisecond precision. It is OK to use time.Now() for tracking
the time when buffered items must be converted to searchable in-memory parts, since time.Now()
calls aren't located in hot paths.

Increase the flush interval for converting buffered samples to searchable in-memory parts
from one second to two seconds. This should reduce the number of blocks, which are needed
to be processed during high-frequency alerting queries. This, in turn, should reduce CPU usage.

While at it, hardcode the maximum size of a rawRows shard to 8Mb, since this size gives the optimal
data ingestion performance according to load tests. This reduces memory usage and CPU usage on systems
with big amounts of RAM under high data ingestion rate.
2024-02-22 20:21:14 +02:00
Aliaksandr Valialkin
463bc27312
lib/storage: avoid superfluous copy of block header data 2024-02-22 20:21:14 +02:00
Aliaksandr Valialkin
8d9d7a8a12
app/vmstorage: expose vm_snapshots metric, which shows the current number of snapshots
While at it, refresh docs about snapshots - https://docs.victoriametrics.com/#how-to-work-with-snapshots
2024-02-22 18:32:57 +02:00
Aliaksandr Valialkin
aec9cd4316
lib/storage: do not pool rawRowsBlock when flushing rawRows to in-memory blocks
The pooled rawRowsBlock objects occupy big amounts of memory between flushes,
and the flushes are relatively rare. So it is better not to use the pool
and to allocate rawRow blocks on demand. This should reduce the average
memory usage between flushes.
2024-02-22 17:37:48 +02:00
Aliaksandr Valialkin
b7dfe9894c
lib/storage: do not keep rawRows buffer across flush() calls
The buffer can be quite big under high ingestion rate (e.g. more than 100MB).
This leads to increased memory usage between buffer flushes.
So it is better to re-create the buffer on every flush in order to reduce memory usage
between buffer flushes.
2024-02-22 17:22:26 +02:00
Alexander Marshalov
ce3ec3ff2e
[lib/httputils] fixed floating-point error when parsing time in RFC3339 format (#5814)
* [lib/promutils, lib/httputils] fixed floating-point error when parsing time in RFC3339 format (#5801)

* fixed tests

* fixed test

* Revert "fixed test"

This reverts commit 8a29764806.

* Revert "fixed tests"

This reverts commit 9ce13d1042.

* Revert "[lib/promutils, lib/httputils] fixed floating-point error when parsing time in RFC3339 format (#5801)"

This reverts commit a7a04bd4

* [lib/httputils] fixed floating-point error when parsing time in RFC3339 format (#5801)

---------

Co-authored-by: Nikolay <nik@victoriametrics.com>
2024-02-22 10:20:54 +01:00
Aliaksandr Valialkin
0514091948
app/vlselect: follow-up for 451d2abf50
- Consistently return the first `limit` log entries if the total size of found log entries doesn't exceed 1Mb.
  See app/vlselect/logsql/sort_writer.go . Previously random log entries could be returned with each request.
- Document the change at docs/VictoriaLogs/CHANGELOG.md
- Document the `limit` query arg at docs/VictoriaLogs/querying/README.md
- Make the change less intrusive.

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/5674
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/pull/5778
2024-02-18 23:05:51 +02:00
Dmytro Kozlov
451d2abf50
Enable the limit query param for the /select/logsql/query (#5778)
* app/vlselect: add limit for logs query

* app/vlselect: CHANGELOG.md

* app/vlselect: stop search process if limit is reached, update logic, remove default limit

* app/vlselect: fix tests

* app/vlselect: fix filter tests

* app/vlselect: fix tests
2024-02-18 22:58:47 +02:00
Aliaksandr Valialkin
c42ddce159
lib/promscrape: add support for enable_compression option in the same way as Prometheus does
Updates https://github.com/prometheus/prometheus/pull/13166
Updates https://github.com/prometheus/prometheus/issues/12319

Do not document enable_compression option at docs/sd_configs.md, since vmagent already supports
more clear disable_compression option - see https://docs.victoriametrics.com/vmagent/#scrape_config-enhancements
2024-02-18 19:40:39 +02:00
Aliaksandr Valialkin
5a092e161c
lib/promscrape/discovery/kuma: add support for client_id option
See https://github.com/prometheus/prometheus/pull/13278
2024-02-18 19:19:40 +02:00
Aliaksandr Valialkin
d03719e72d
docs/CHANGELOG.md: document f8207e33a2 2024-02-17 17:52:53 +02:00
Alexander Marshalov
f8207e33a2
lib/httputils: fixed error message for getting zero duration (#5795) (#5812) 2024-02-16 15:24:59 +01:00
Aliaksandr Valialkin
6b9bedd0f9
app/vmstorage: expose vm_last_partition_parts metrics, which may help identifying performance issues related to the increased number of parts in the last partition 2024-02-15 14:51:19 +02:00
Aliaksandr Valialkin
eb1505ba14
lib/uint64set: go fmt after c0a9b87f46 2024-02-15 14:50:44 +02:00
Aliaksandr Valialkin
c0a9b87f46
lib/mergeset: optimize Set.AddMulti() a bit for len(items) < 10000
This should improve the search speed for time series matching the given label filters
2024-02-15 14:30:15 +02:00
Aliaksandr Valialkin
cdac04997a
lib/uint64set: benchmark AddMulti on small number of items, since this case is the most frequent in lib/storage 2024-02-15 14:28:36 +02:00
Aliaksandr Valialkin
baaa88001e
lib/promrelabel: store the original labels before returning them to promutils.PutLabels()
This should reduce memory allocations.

This is a follow-up for b09bd6c42a

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/5389
2024-02-14 16:09:10 +02:00
Aliaksandr Valialkin
b09bd6c42a
lib/promrelabel: factor out applyInternal code into ApplyDebug and Apply functions
This improves readability and maintainability

Also remove memory allocation from SortLabels()
2024-02-14 14:27:08 +02:00
Aliaksandr Valialkin
b564729d75
lib/promscrape: avoid copying labels when -promscrape.dropOriginalLabels command-line flag is set
This should save some CPU

This regression has been introduced in 487f6380d0
when working on https://github.com/VictoriaMetrics/VictoriaMetrics/issues/5389
2024-02-14 03:25:36 +02:00
Aliaksandr Valialkin
95222b2079
all: upgrade Go builder from Go1.21.7 to Go1.22.0
See https://go.dev/doc/go1.22
2024-02-12 21:59:51 +02:00
Aliaksandr Valialkin
a49a50701a
lib/mergeset: do not panic on too long items passed to Table.AddItems()
Instead, log a sample of these long items once per 5 seconds into error log,
so users could notice and fix the issue with too long labels or too many labels.

Previously this panic could occur in production when ingesting samples with too long labels.
2024-02-12 19:32:18 +02:00
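
A hedged sketch of rate-limited error logging in place of the former panic; the names here are illustrative, not the actual lib/mergeset code:

    package example

    import (
        "log"
        "sync"
        "time"
    )

    // logLimiter logs at most one message per interval, so a flood of too-long
    // items produces a periodic sample in the error log instead of millions of lines.
    type logLimiter struct {
        mu       sync.Mutex
        last     time.Time
        interval time.Duration
    }

    func (ll *logLimiter) logf(format string, args ...any) {
        ll.mu.Lock()
        defer ll.mu.Unlock()
        if time.Since(ll.last) < ll.interval {
            return
        }
        ll.last = time.Now()
        log.Printf(format, args...)
    }

    var tooLongItemLogger = &logLimiter{interval: 5 * time.Second}

    // addItem drops too-long items instead of panicking, logging a sample of them.
    func addItem(item []byte, maxItemLen int) bool {
        if len(item) > maxItemLen {
            tooLongItemLogger.logf("skipping too long item (%d bytes; limit is %d bytes); sample: %q", len(item), maxItemLen, item[:maxItemLen])
            return false
        }
        // ... add the item to the table ...
        return true
    }
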
Aliaksandr Valialkin
5d69ba630e
lib/mergeset: properly record the firstItem in metaindexRow at blockStreamWriter.WriteBlock
Commit 3c246cdf00 added an optimization where the previous metaindexRow
could be saved to disk when the current block header couldn't be added to the indexBlock because the resulting
indexBlock size became too big. This could result in an empty metaindexRow.firstItem for the next metaindexRow.
2024-02-12 18:06:50 +02:00
Aliaksandr Valialkin
2eb967231b
lib/storage: do not append headerData to bsw.indexData if its size exceeds maxBlockSize
This is a follow-up optimization after 3c246cdf00
2024-02-12 18:04:37 +02:00
Aliaksandr Valialkin
39e0007e14
lib/snapshot: move Time, Validate and NewName into lib/snapshot/snapshotutil package
This allows removing importing unneeded command-line flags into binaries, which import lib/storage,
which, in turn, was importing lib/snapshot in order to use Time, Validate and NewName functions.

This is a follow-up for 83e55456e2

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/pull/5738
2024-02-09 04:18:45 +02:00
Aliaksandr Valialkin
ae8a867924
all: add support for specifying multiple -httpListenAddr options 2024-02-09 03:15:04 +02:00
Aliaksandr Valialkin
d8c1db7953
lib/httpserver: do not close client connections every 2 minutes by default
Closing client connections every 2 minutes doesn't help load balancing -
this just leads to "jumpy" connections between multiple backend servers,
i.e. the load isn't spread evenly among backend servers, and instead jumps
between the servers every 2 minutes.

It is still possible to periodically close client connections by specifying a non-zero -http.connTimeout command-line flag.

This should help with https://github.com/VictoriaMetrics/VictoriaMetrics/issues/1304#issuecomment-1636997037

This is a follow-up for d387da142e
2024-02-08 21:10:25 +02:00
Khushi Jain
83e55456e2
app/vmbackup: support client-side TLS configuration for create/delete snapshot API (#5738) 2024-02-08 15:52:00 +01:00
Aliaksandr Valialkin
0f5176380b
lib/mergeset: add a test for too long item passed to Table.AddItems() 2024-02-08 14:12:56 +02:00
Aliaksandr Valialkin
5817e4c0c1
lib/mergeset: typo fix: indexdb/indexBlock -> indexdb/indexBlocks 2024-02-08 13:53:18 +02:00
Aliaksandr Valialkin
3c246cdf00
lib/{storage,mergeset}: do not create index blocks with sizes exceeding 64Kb in common case
This should reduce memory fragmentation and memory usage for indexdb/indexBlocks and storage/indexBlocks caches
2024-02-08 13:52:24 +02:00
Aliaksandr Valialkin
93ada2eaaf
lib/mergeset: verify that the index block for an in-memory part doesn't exceed 3*maxIndexBlockSize 2024-02-08 13:50:14 +02:00
Aliaksandr Valialkin
077f84964a
lib/mergeset: do not store commonPrefix in blockHeader if the block contains only a single item
There is no sense in storing commonPrefix for blockHeader containing only a single item,
since this only increases blockHeader size without any benefits.
2024-02-08 13:47:24 +02:00
Aliaksandr Valialkin
e1bf8440eb
lib/mergeset: prevent from possible too big indexBlockSize panic
This panic could occur when samples with too long label values are ingested into VictoriaMetrics.
This could result in too long firstItem and commonPrefix values at blockHeader (up to 64kb each).
This may inflate the maximum index block size by 4 * maxIndexBlockSize.
2024-02-08 12:54:10 +02:00
Aliaksandr Valialkin
bf9ea249a3
lib/protoparser/datadogsketches: use math.RoundToEven() for calculating the rank
The original code uses this function - see 48d52eeea6/pkg/quantile/sparse.go (L138)

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/pull/5775
2024-02-07 21:43:26 +02:00
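
The difference between the two rounding modes in one line: math.Round(2.5) == 3, while math.RoundToEven(2.5) == 2 (banker's rounding). A hedged sketch of a rank calculation using it; the exact formula is illustrative, not the parser's code:

    package example

    import "math"

    // rankFromQuantile converts a quantile q in [0, 1] into an index using
    // banker's rounding, matching the reference implementation's behavior on .5 halves.
    func rankFromQuantile(q float64, count int) int {
        return int(math.RoundToEven(q * float64(count-1)))
    }
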
Aliaksandr Valialkin
093798375e
lib/protoparser/datadogsketches: add more permalinks to the original source code
These permalinks should help verifying the correctness of the code

This is a follow-up after 07213f4e0c

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/pull/5775
2024-02-07 21:33:20 +02:00
Andrii Chubatiuk
07213f4e0c
added ddsketch permalink (#5775)
Co-authored-by: Andrew Chubatiuk <andrew.chubatiuk@motional.com>
2024-02-07 19:01:09 +00:00
Aliaksandr Valialkin
e78e5ccfaa
docs/CHANGELOG.md: support empty command-line flag values in short array notation
For example, -fooDuration=',10s,' is now supported - it sets three command-line flag values:

- the first and the last one are set to the default value for `-fooDuration`
- the second one is set to 10s
2024-02-07 20:53:13 +02:00
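
A minimal sketch of how the short array notation can be parsed, assuming empty items fall back to the flag's default value (the function is hypothetical, not the actual flag parsing code):

    package example

    import (
        "strings"
        "time"
    )

    // parseDurationList parses a comma-separated flag value such as ",10s,"
    // and substitutes defaultValue for every empty item, so -fooDuration=',10s,'
    // yields three values: default, 10s, default.
    func parseDurationList(s string, defaultValue time.Duration) ([]time.Duration, error) {
        items := strings.Split(s, ",")
        ds := make([]time.Duration, len(items))
        for i, item := range items {
            if item == "" {
                ds[i] = defaultValue
                continue
            }
            d, err := time.ParseDuration(item)
            if err != nil {
                return nil, err
            }
            ds[i] = d
        }
        return ds, nil
    }
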
Aliaksandr Valialkin
541b644d3d
app/{vmagent,vminsert}: follow-up after a1d1ccd6f2
- Document the change at docs/CHANGELOG.md
- Copy changes from docs/Single-server-VictoriaMetrics.md to README.md
- Add missing handler for processing multitenant requests ( https://docs.victoriametrics.com/vmagent/#multitenancy )
- Substitute github.com/stretchr/testify dependency with 3 lines of code in the added tests
- Comment unclear code at lib/protoparser/datadogsketches/parser.go , so @AndrewChubatiuk could update it
  and add permalinks to the original source code there.
- Various code cleanups

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/pull/5584
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/3091
2024-02-07 01:28:05 +02:00
Andrii Chubatiuk
a1d1ccd6f2
support datadog /api/beta/sketches API (#5584)
Co-authored-by: Andrew Chubatiuk <andrew.chubatiuk@motional.com>
Co-authored-by: Aliaksandr Valialkin <valyala@victoriametrics.com>
2024-02-06 20:58:11 +00:00
Aliaksandr Valialkin
9e8e4cca6a
lib/storage: move fixupTimestamps() call to Block.Init()
This is a follow-up for 0bf7921721
2024-02-06 22:43:48 +02:00
Zakhar Bessarab
0bf7921721
lib/storage/raw_row: properly initialize TS for tmp blocks (#5762)
Signed-off-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>
2024-02-06 20:42:32 +00:00
Aliaksandr Valialkin
e159cc30df
lib/fs: lazily open the file at ReaderAt on the first access
This should significantly reduce the number of open ReaderAt files
on VictoriaMetrics and VictoriaLogs startup.

The open files can be tracked via vm_fs_readers metric
2024-02-06 20:42:57 +02:00
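
Lazy opening on first access can be done with sync.Once; a hedged sketch of the idea (not the actual lib/fs implementation):

    package example

    import (
        "os"
        "sync"
    )

    // lazyReaderAt postpones opening the file until the first ReadAt call,
    // so files that are never read don't consume file descriptors at startup.
    type lazyReaderAt struct {
        path string

        once sync.Once
        f    *os.File
        err  error
    }

    func (r *lazyReaderAt) ReadAt(p []byte, off int64) (int, error) {
        r.once.Do(func() {
            r.f, r.err = os.Open(r.path)
        })
        if r.err != nil {
            return 0, r.err
        }
        return r.f.ReadAt(p, off)
    }

    func (r *lazyReaderAt) Close() error {
        if r.f == nil {
            return nil
        }
        return r.f.Close()
    }
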
Aliaksandr Valialkin
7bc3af1224
lib/httpserver: add support for mTLS for requests to -httpListenAddr 2024-02-06 17:46:19 +02:00
Aliaksandr Valialkin
de9a9546c2
lib/cgroup: remove SetGOGC() function
GOGC can already be set via an environment variable. There is no need to add
new approaches for setting GOGC (such as a command-line flag), since they complicate operations.
2024-02-05 12:11:08 +02:00
Aliaksandr Valialkin
a40fcc8aa6
lib/prompbmarshal: code cleanup after 8aaa828ba3 2024-02-01 21:40:16 +02:00
Aliaksandr Valialkin
0e3c532bf7
app/vmselect/netstorage: prevent from disk write IO when closing temporary files
Remove temporary file before closing it in order to signal the OS that it shouldn't
store the file contents from page cache to disk when the file is closed.

Gracefully handle the case when the file cannot be removed before being closed -
in this case remove the file after closing it. This allows working on Windows.

Also remove the superfluous opening of the temporary file for reading - re-use the file handle already opened for writing.

This is a follow-up for 9b1e002287
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/pull/4020
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/70
2024-02-01 19:12:44 +02:00
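
The remove-before-close trick described above, as a hedged sketch with a Windows fallback (an illustrative helper, not the netstorage code):

    package example

    import "os"

    // closeAndRemove removes the temporary file before closing it, hinting the OS
    // that its page-cache contents don't need to be written back to disk.
    // On systems where an open file cannot be removed (e.g. Windows),
    // it falls back to removing the file after closing it.
    func closeAndRemove(f *os.File) error {
        path := f.Name()
        removeErr := os.Remove(path)
        if err := f.Close(); err != nil {
            return err
        }
        if removeErr != nil {
            return os.Remove(path)
        }
        return nil
    }
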
Dima Lazerka
49d5e7fef5
Improve docs on security http headers (#5262)
* Improve docs on security http headers

* Apply suggestions from code review

---------

Co-authored-by: Aliaksandr Valialkin <valyala@victoriametrics.com>
2024-02-01 12:40:11 +00:00
noodles2hg
cafd6f08b3
lib/logstorage: proper exit during block search (#5400) 2024-02-01 12:11:05 +00:00
Jiajing LU
333bda8702
count inmemoryParts that have not been taken for merge (#5447) 2024-02-01 12:06:28 +00:00
Aliaksandr Valialkin
8aaa828ba3
lib/prompbmarshal: return back custom protobuf marshaler for lib/prompbmarshal.WriteRequest
The easyproto-based marshaler is 2x slower than the previous custom marshaler,
so let's stick with the custom one. This improves the performance of sending data to remote storage from vmagent
and reduces CPU usage to pre-v1.97.0 levels.
2024-02-01 06:33:06 +02:00
Aliaksandr Valialkin
55b5c13839
lib/encoding: follow-up for 49e3665d6d
Improve performance for typical cases of varint marshaling / unmarshaling further.

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/pull/5721
2024-02-01 05:37:40 +02:00
Fuchun Zhang
49e3665d6d
make encoding.MarshalVarInt64s faster (#5721)
* make encoding.MarshalVarInt64s faster

* add fast path for MarshalVarInt64s

* make UnmarshalVarUint64s faster

* remove comment
2024-02-01 05:34:37 +02:00
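
A hedged sketch of a single-byte fast path for varint marshaling, in the spirit of the two commits above (the actual lib/encoding code may differ):

    package example

    import "encoding/binary"

    // marshalVarInt64s appends zig-zag varint-encoded values to dst.
    // The fast path handles the common case where every encoded value fits
    // into one byte, avoiding the per-byte loop of the generic encoder.
    func marshalVarInt64s(dst []byte, vs []int64) []byte {
        fast := true
        for _, v := range vs {
            if uint64((v<<1)^(v>>63)) >= 0x80 { // zig-zag encoded value doesn't fit one byte
                fast = false
                break
            }
        }
        if fast {
            for _, v := range vs {
                dst = append(dst, byte((v<<1)^(v>>63)))
            }
            return dst
        }
        for _, v := range vs {
            dst = binary.AppendVarint(dst, v) // generic slow path
        }
        return dst
    }
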
Aliaksandr Valialkin
c91614b626
lib/encoding: added benchmarks for marshaling / unmarshaling of varints
This is needed for https://github.com/VictoriaMetrics/VictoriaMetrics/pull/5721
2024-02-01 05:11:01 +02:00
helen
c8a96ac241
clean unused code (#5735)
Signed-off-by: helen <haitao.zhang@daocloud.io>
2024-01-31 17:50:36 +00:00
Aliaksandr Valialkin
b7fd7ee0b6
lib/promauth: follow-up for fca3b14b7b
- Simplify the code for handling BasicAuthConfig at lib/promauth/config.go
- Move the description of the change into correct place at docs/CHANGELOG.md
- Put tests for username in front of tests for password at lib/promauth/config_test.go

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/pull/5720
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/5511
2024-01-31 19:45:16 +02:00
Nihal
fca3b14b7b
Support for username_file in scrape config (basic_auth) similar to Prometheus for having config compatibility (#5720)
* adding support for username_file in basic_auth of scrape config

Signed-off-by: Syed Nihal <syed.nihal@nokia.com>

* adding support for username_file in basic_auth of scrape config. See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/5511

Signed-off-by: Syed Nihal <syed.nihal@nokia.com>

* adding support for username_file in basic_auth of scrape config

Signed-off-by: Syed Nihal <syed.nihal@nokia.com>

* adding support for username_file in basic_auth of scrape config

Signed-off-by: Syed Nihal <syed.nihal@nokia.com>

* adding support for username_file in basic_auth of scrape config

Signed-off-by: Syed Nihal <syed.nihal@nokia.com>

---------

Signed-off-by: Syed Nihal <syed.nihal@nokia.com>
2024-01-31 17:41:16 +00:00
Aliaksandr Valialkin
bc7cf4950b
lib/promscrape: use the standard net/http.Client instead of fasthttp.Client for scraping targets in non-streaming mode
While fasthttp.Client uses less CPU and RAM when scraping targets with small responses (up to 10K metrics),
it doesn't work well when scraping targets with big responses such as kube-state-metrics.
In this case it could use big amounts of additional memory compared to net/http.Client,
since fasthttp.Client reads the full response in memory and then tries re-using the large buffer
for further scrapes.

Additionally, fasthttp.Client-based scraping had various issues with proxying, redirects
and scrape timeouts like the following ones:

- https://github.com/VictoriaMetrics/VictoriaMetrics/issues/1945
- https://github.com/VictoriaMetrics/VictoriaMetrics/issues/5425
- https://github.com/VictoriaMetrics/VictoriaMetrics/issues/2794
- https://github.com/VictoriaMetrics/VictoriaMetrics/issues/1017

This should help reduce memory usage for the case when a target returns a big response
and this response is scraped by fasthttp.Client at first before switching to stream parsing mode
for subsequent scrapes. Now the switch to stream parsing mode is performed on the first scrape
after reading the response body in memory and noticing that its size exceeds the value passed
to -promscrape.minResponseSizeForStreamParse command-line flag.
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/5567

Overrides https://github.com/VictoriaMetrics/VictoriaMetrics/pull/4931
2024-01-30 18:39:10 +02:00
Aliaksandr Valialkin
c2373a8109
lib/promscrape: fix BenchmarkScrapeWorkScrapeInternal, which has been broken by the commit 65bc460323 2024-01-30 16:06:06 +02:00
Aliaksandr Valialkin
431aa16c8d
lib/storage: keep (date, metricID) entries only for the last two dates
Entries for previous dates are usually not used, so there is little sense in keeping them in memory.

This should reduce the size of storage/date_metricID cache, which can be monitored
via vm_cache_entries{type="storage/date_metricID"} metric.
2024-01-29 18:43:59 +01:00
Aliaksandr Valialkin
5d66ee88bd
lib/storage: do not check the limit for -search.maxUniqueTimeseries when performing /api/v1/labels and /api/v1/label/.../values requests
This limit has little sense for these APIs, since:

- These APIs frequently result in scanning all the time series on the given time range.
  For example, if extra_filters={datacenter="some_dc"} .

- Users expect these APIs shouldn't hit the -search.maxUniqueTimeseries limit,
  which is intended for limiting resource usage at /api/v1/query and /api/v1/query_range requests.

Also limit the concurrency for /api/v1/labels, /api/v1/label/.../values
and /api/v1/series requests in order to limit the maximum memory usage and CPU usage for these APIs.
This limit shouldn't affect typical use cases for these APIs:

- Grafana dashboard load when dashboard labels should be loaded
- Auto-suggestion list load when editing the query in Grafana or vmui

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/5055
2024-01-29 16:45:12 +01:00
hagen1778
98b805544e
lib/streamaggr: fix incorrect err message for min interval value
Signed-off-by: hagen1778 <roman@victoriametrics.com>
2024-01-29 09:53:05 +01:00
Aliaksandr Valialkin
0ed291102d
lib/decimal: follow-up for e6bad5174f
- Add a benchmark for CalibrateScale.
- Reduce the decimal multipliers table size from 256Kb to 192 bytes.
- Use more clear naming for variables.

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/pull/5672
2024-01-27 00:08:57 +01:00
Fuchun Zhang
64780f4f02
Optimize the performance of data merge: decimal.CalibrateScale() (#5672)
* Optimize the performance of data merge: decimal.CalibrateScale() from 49633 ns/op to 9146 ns/op

* Optimize the performance of data merge: decimal.CalibrateScale()
2024-01-27 00:08:56 +01:00
Hui Wang
6ee1bfeb3c
add inserting comma inside value instruction to flag description (#5666) 2024-01-26 22:46:49 +01:00
Roman Khavronenko
aaa526e8ff
lib/streamaggr: skip unfinished aggregation state on shutdown by default (#5689)
Sending unfinished aggregate states tends to produce unexpected anomalies with lower values than expected.
The old behavior can be restored by specifying `flush_on_shutdown: true` setting in streaming aggregation config

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2024-01-26 22:45:23 +01:00
Aliaksandr Valialkin
bb7a419cc3
lib/{mergeset,storage}: make background merge more responsive and scalable
- Maintain a separate worker pool per each part type (in-memory, file, big and small).
  Previously a shared pool was used for merging all the part types.
  A single merge worker could merge parts with mixed types at once. For example,
  it could merge simultaneously an in-memory part plus a big file part.
  Such a merge could take hours for big file part. During the duration of this merge
  the in-memory part was pinned in memory and couldn't be persisted to disk
  under the configured -inmemoryDataFlushInterval .

  Another common issue, which could happen when parts with mixed types are merged,
  is uncontrolled growth of in-memory parts or small parts when all the merge workers
  were busy with merging big files. Such growth could lead to significant performance
  degradation for queries, since every query needs to check an ever-growing list of parts.
  This could also slow down the registration of new time series, since VictoriaMetrics
  searches for the internal series_id in the indexdb for every new time series.

  The third issue is graceful shutdown duration, which could be very long when a background
  merge is running on in-memory parts plus big file parts. This merge couldn't be interrupted,
  since it merges in-memory parts.

  A separate pool of merge workers per part type elegantly resolves these issues:
  - In-memory parts are merged to file-based parts in a timely manner, since the maximum
    size of in-memory parts is limited.
  - Long-running merges for big parts do not block merges for in-memory parts and small parts.
  - Graceful shutdown duration is now limited by the time needed for flushing in-memory parts to files.
    Merging for file parts is instantly canceled on graceful shutdown now.

- Deprecate -smallMergeConcurrency command-line flag, since the new background merge algorithm
  should automatically self-tune according to the number of available CPU cores.

- Deprecate -finalMergeDelay command-line flag, since it wasn't working correctly.
  It is better to run forced merge when needed - https://docs.victoriametrics.com/#forced-merge

- Tune the number of shards for pending rows and items before the data goes to in-memory parts
  and becomes visible for search. This improves the maximum data ingestion rate and the maximum rate
  for registration of new time series. This should reduce the duration of data ingestion slowdown
  in VictoriaMetrics cluster on e.g. re-routing events, when some of vmstorage nodes become temporarily
  unavailable.

- Prevent from possible "sync: WaitGroup misuse" panic on graceful shutdown.

This is a follow-up for fa566c68a6 .
Thanks to @misutoth for the inspiration at https://github.com/VictoriaMetrics/VictoriaMetrics/pull/5212

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/5190
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/3790
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/3551
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/3337
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/3425
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/3647
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/3641
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/648
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/291
2024-01-26 22:27:47 +01:00
Aliaksandr Valialkin
e84c877503
lib/mergeset: remove inmemoryBlock pooling, since it wasn't effective
This should reduce memory usage a bit when new time series are ingested at high rate (aka high churn rate)
2024-01-26 21:34:57 +01:00
Aliaksandr Valialkin
2655c02d5e
lib/logstorage: make sure that WaitGroup.Add isn't called after stopCh is closed and WaitGroup.Wait is called
This protects from rare panic, which may occur during graceful shutdown of VictoriaLogs
2024-01-26 21:17:02 +01:00
Aliaksandr Valialkin
c3a585cfe5
lib/storage: rename *AssistedMerges to *AssistedMergesCount in order to make these field names less misleading
These fields are counters, not gauges, so adding the Count suffix to them makes it easier to understand this while reading the code
2024-01-25 10:19:32 +02:00
Aliaksandr Valialkin
18df07e824
lib/mergeset: start assisted merge for file parts only if the number of file parts is bigger than maxFileParts
The maxFileParts usage has been accidentally removed in fa566c68a6

While at it, add Count suffix to *AssistedMerges counter names in order to make them less misleading.
Previously their names were falsely suggesting that these are gauges, which show the number of concurrently
executed assisted merges.
2024-01-24 15:08:42 +02:00
Aliaksandr Valialkin
ac5b740750
lib/promscrape/discovery/kubernetes: typo fix in the comment for ContainerStateTerminated struct
This is a follow-up for ef12598ad4
2024-01-24 15:06:46 +02:00
Aliaksandr Valialkin
ef12598ad4
lib/promscrape/discovery/kubernetes: do not generate targets for already terminated pods and containers
Already terminated pods and containers cannot be scraped and will never resurrect,
so there is zero sense in creating scrape targets for them.
2024-01-24 14:57:53 +02:00
Aliaksandr Valialkin
f888a019fe
lib/streamaggr: expand %{ENV} placeholders in stream aggregation configs 2024-01-24 12:31:27 +02:00
Aliaksandr Valialkin
fa566c68a6
lib/mergeset: really limit the number of in-memory parts to 15
It has turned out that the registration of new time series slows down linearly
with the number of indexdb parts, since VictoriaMetrics needs to check every indexdb part
when it searches for the TSID of a newly ingested metric name.

The number of in-memory parts grows when new time series are registered
at a high rate. The number of in-memory parts grows faster on systems with a big number
of CPU cores, because the mergeset maintains per-CPU buffers with newly added entries
for the indexdb, and every such entry is transformed eventually into a separate in-memory part.

The solution has been suggested in https://github.com/VictoriaMetrics/VictoriaMetrics/pull/5212
by @misutoth - to limit the number of in-memory parts with buffered channel.
This solution is implemented in this commit. Additionally, this commit merges per-CPU parts
into a single part before adding it to the list of in-memory parts. This reduces CPU load
when searching for TSID by newly ingested metric name.

The https://github.com/VictoriaMetrics/VictoriaMetrics/pull/5212 recommends setting the limit on the number
of in-memory parts to 100, but my internal testing shows that a much lower limit of 15 works with the same efficiency
on a system with 16 CPU cores while reducing memory usage for `indexdb/dataBlocks` cache by up to 50%.

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/5190
2024-01-24 03:38:12 +02:00
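
The buffered-channel limit mentioned above can be sketched as a counting semaphore; the names and the release point are illustrative assumptions:

    package example

    // inmemoryPartsLimitCh limits the number of in-memory parts to 15: a token is
    // taken before a new in-memory part is created and returned when the part is
    // merged into a file-based part. When the channel is full, part creation
    // blocks, applying backpressure instead of letting the part list grow unbounded.
    var inmemoryPartsLimitCh = make(chan struct{}, 15)

    type inmemoryPart struct{ /* buffered items */ }

    func newInmemoryPart() *inmemoryPart {
        inmemoryPartsLimitCh <- struct{}{} // acquire a slot; blocks if 15 parts already exist
        return &inmemoryPart{}
    }

    func putInmemoryPart(p *inmemoryPart) {
        <-inmemoryPartsLimitCh // release the slot
    }
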
Aliaksandr Valialkin
8fb8b71295
lib/encoding: remove unneeded re-slicing of byte slice before passing it to binary.BigEndian.Uint* 2024-01-23 22:50:29 +02:00
Aliaksandr Valialkin
ae643ef1f1
lib/{storage,mergeset}: reduce the maximum compression level for the stored data
This reduces CPU usage a bit, while not increasing resulting file sizes according to synthetic tests.
2024-01-23 17:46:50 +02:00
Aliaksandr Valialkin
6c214397ed
lib/storage: compress metricIDs, which match the given filters, before storing them in tagFiltersToMetricIDsCache
This allows reducing the indexdb/tagFiltersToMetricIDs cache size by 8x on average.
The cache size can be checked via vm_cache_size_bytes{type="indexdb/tagFiltersToMetricIDs"} metric exposed at /metrics page.
2024-01-23 16:09:55 +02:00
Aliaksandr Valialkin
4d78954158
lib/storage: do not sort metricIDs passed to Storage.prefetchMetricNames, since the caller is responsible for the sorting 2024-01-23 16:08:38 +02:00
Aliaksandr Valialkin
6d84b1beef
lib/filestream: do not measure read / write duration from / to in-memory buffers
Measuring read / write duration from / to in-memory buffers has little sense,
since it will always be fast. It is better to measure read / write duration from / to
real files at vm_filestream_write_duration_seconds_total and vm_filestream_read_duration_seconds_total metrics.
This also reduces overhead on time.Now() and Histogram.UpdateDuration() calls
per each filestream.Reader.Read() and filestream.Writer.Write() call when the data is read / written from / to in-memory buffers.

This is a follow-up for 2f63dec2e3
2024-01-23 14:52:22 +02:00
Roman Khavronenko
89e3c70ccd
lib/promscrape: respect 0 value for series_limit param (#5663)
* lib/promscrape: respect `0` value for `series_limit` param

Respect `0` value for `series_limit` param in `scrape_config`
even if global limit was set via `-promscrape.seriesLimitPerTarget`.
Previously, a `0` value was ignored in favor of `-promscrape.seriesLimitPerTarget`.

This behavior aligns with possibility to override `series_limit` value via
relabeling with `__series_limit__` label.

Signed-off-by: hagen1778 <roman@victoriametrics.com>

* Update docs/CHANGELOG.md

---------

Signed-off-by: hagen1778 <roman@victoriametrics.com>
Co-authored-by: Aliaksandr Valialkin <valyala@victoriametrics.com>
2024-01-23 13:09:14 +02:00
Aliaksandr Valialkin
1c5163ae51
lib/mergeset: make sure that the first and the last items are in the original range after prepareBlock()
Previously the checks were too strict, requiring prepareBlock() to leave the first and last items unchanged

Thanks to @ahfuzhang for the suggestion at https://github.com/VictoriaMetrics/VictoriaMetrics/issues/5655
2024-01-23 12:58:32 +02:00
Aliaksandr Valialkin
980338861f
lib/mergeset: skip comparison for every item in the block during merge if the last item in the block is smaller than the first item in the next block
Thanks to @ahfuzhang for the suggestion at https://github.com/VictoriaMetrics/VictoriaMetrics/issues/5651
2024-01-23 03:15:52 +02:00
Aliaksandr Valialkin
3449d563bd
all: add up to 10% random jitter to the interval between periodic tasks performed by various components
This should smooth CPU and RAM usage spikes related to these periodic tasks,
by reducing the probability that multiple periodic tasks run at the same time.
2024-01-22 18:40:32 +02:00
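
Adding up to 10% of jitter to a periodic interval is a one-liner; a minimal sketch:

    package example

    import (
        "math/rand"
        "time"
    )

    // addJitter returns d increased by a random amount of up to 10%, so periodic
    // tasks in different components don't all fire at the same instant.
    func addJitter(d time.Duration) time.Duration {
        return d + time.Duration(float64(d)*0.1*rand.Float64())
    }

    // Usage: sleep for addJitter(interval) instead of a fixed interval
    // on every iteration of the periodic task loop.
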
Aliaksandr Valialkin
9b4294e53e
lib/storage: reduce the contention on dateMetricIDCache mutex when new time series are registered at high rate
The dateMetricIDCache puts recently registered (date, metricID) entries into mutable cache protected by the mutex.
The dateMetricIDCache.Has() checks for the entry in the mutable cache when it isn't found in the immutable cache.
Access to the mutable cache is protected by the mutex. This means this access is slow on systems with many CPU cores.
The mutable cache was merged into the immutable cache every 10 seconds in order to avoid slow access to the mutable cache.
This means that ingestion of new time series to VictoriaMetrics could result in significant slowdown for up to 10 seconds
because of bottleneck at the mutex.

Fix this by merging the mutable cache into immutable cache after len(cacheItems) / 2
cache hits under the mutex, e.g. when the entry is found in the mutable cache.
This should automatically adjust intervals between merges depending on the addition rate
for new time series (aka churn rate):

- The interval will be much smaller than 10 seconds under high churn rate.
  This should reduce the mutex contention for mutable cache.
- The interval will be bigger than 10 seconds under low churn rate.
  This should reduce the unneeded work of merging the mutable cache into the immutable cache.
2024-01-22 18:40:32 +02:00
Aliaksandr Valialkin
d3ee3e0ef5
Revert "lib/promscrape: do not store last scrape response when stale markers … (#5577)"
This reverts commit cfec258803.

Reason for revert: the original code already doesn't store the last scrape response when stale markers are disabled.
The scrapeWork.areIdenticalSeries() function always returns true if stale markers are disabled.
This prevents storing the last response at scrapeWork.processScrapedData().

It looks like the reverted commit could also return back the issue https://github.com/VictoriaMetrics/VictoriaMetrics/issues/3660

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/pull/5577
2024-01-22 00:43:48 +02:00
Aliaksandr Valialkin
1c7f990fad
app/vmselect: handle negative time range start in a generic manner inside NewSearchQuery()
This is a follow-up for cf03e11d89

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/5553
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/pull/5630
2024-01-21 23:45:31 +02:00
Hui Wang
4e3242b02d
lib/promscrape/discovery/kubernetes: fix watcher start order for roles endpoints and endpointslice (#5557)
* lib/promscrape/discovery/kubernetes: fix watcher start order for roles endpoints and endpointslice

Previously the groupWatcher could be mistakenly stopped when requests for pod or service resources took too long.

* remove mislead comment

* docs/sd_configs.md: mention -promscrape.kubernetes.attachNodeMetadataAll flag in the description for attach_metadata section

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/4640

* wip

* lib/promscrape/kubernetes: prevent from stopping groupWatcher when there are in-flight apiWatcher.mustStart() calls

groupWatcher is stopped if it has zero registered apiWatchers for 14 seconds.
But such a groupWatcher can be still in use if apiWatcher for `role: endpoints` or `role: endpointslice`
is being registered and the discovery of the associated `pod` and/or `service` objects takes longer
than 14 seconds - see the beginning of groupWatcher.startWatchersForRole() function for details.

Track the number of in-flight calls to apiWatcher.mustStart() and prevent from stopping the associated groupWatcher
if the number of in-flight calls is non-zero.

P.S. postponing the discovery of `pod` and/or `service` objects associated with `endpoints` or `endpointslice` roles
isn't the best solution, since it slows down initial discovery of `endpoints` and `endpointslice` targets.

* typo fix

---------

Co-authored-by: Aliaksandr Valialkin <valyala@victoriametrics.com>
2024-01-21 23:13:15 +02:00
Aliaksandr Valialkin
1f105dde98
all: allow dynamically reading *AuthKey flag values from files and urls
Examples:

1) -metricsAuthKey=file:///abs/path/to/file - reads flag value from the given absolute filepath
2) -metricsAuthKey=file://./relative/path/to/file - reads flag value from the given relative filepath
3) -metricsAuthKey=http://some-host/some/path?query_arg=abc - reads flag value from the given url

The flag value is automatically updated when the file contents changes.
2024-01-21 22:03:38 +02:00
Aliaksandr Valialkin
0b2ea1a7c7
all: call atomic.Load* in front of atomic.CompareAndSwap* at places where the atomic.CompareAndSwap* returns false most of the time
This allows avoiding slow inter-CPU synchronization induced by atomic.CompareAndSwap*
2024-01-21 14:04:54 +02:00
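
The load-before-CAS pattern from the commit above, as a hedged sketch on a maximum tracker:

    package example

    import "sync/atomic"

    // setMaxIfBigger updates m to v only if v is bigger. The cheap atomic load
    // up front skips the CompareAndSwap entirely in the common case where the
    // stored maximum is already big enough, avoiding the inter-CPU
    // synchronization cost of a CAS that would fail anyway.
    func setMaxIfBigger(m *atomic.Uint64, v uint64) {
        for {
            current := m.Load()
            if v <= current {
                return // fast path: no CAS issued at all
            }
            if m.CompareAndSwap(current, v) {
                return
            }
        }
    }
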
Aliaksandr Valialkin
4eb9926125
lib/promscrape: code cleanup: send stale markers immediately after generating automatic metrics
This cleanup has been extracted from https://github.com/VictoriaMetrics/VictoriaMetrics/pull/5557/files#diff-6b205cf6637d7b65a5c45d9417d08822d4efad94227268cb196f61aa2a0fc0f7
2024-01-21 05:18:22 +02:00
Aliaksandr Valialkin
12f2c5679b
all: consistently clear prompbmarshal.Label by assigning an empty struct instead of zeroing Name and Value individually 2024-01-21 05:11:05 +02:00
Aliaksandr Valialkin
90768aa418
lib/storage/partition.go: remove misleading comment, which falsely states that inmemoryParts isn't visible to search
Thanks to @satjd for raising attention to this comment at https://github.com/VictoriaMetrics/VictoriaMetrics/issues/5410
2024-01-21 04:49:35 +02:00
Aliaksandr Valialkin
7fba73ce11
lib/promscrape/discovery/kubernetes: add -promscrape.kubernetes.attachNodeMetadataAll command-line flag
This flag allows setting attach_metadata.node=true for all the kubernetes_sd_configs defined at -promscrape.config

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/4640

Thanks to wasim-nihal for the initial implementation at https://github.com/VictoriaMetrics/VictoriaMetrics/pull/5593
2024-01-21 03:13:56 +02:00
Nikolay
8ab0ce3ded
app/vmselect: abort streaming connections for vmselect (#5650)
* app/vmselect: abort streaming connections for vmselect
Due to the streaming nature of export APIs, curl and similar tools cannot
detect errors that happen after an HTTP header with status 200 was
written to the response.

This PR tracks whether the body write has already started and closes the connection.

This allows the client to detect an unexpected chunk sequence and return an error
to the caller.

Mostly it affects vmselect at cluster version

https://github.com/VictoriaMetrics/VictoriaMetrics/issues/5645

* wip

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/5645
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/pull/5650

---------

Co-authored-by: Aliaksandr Valialkin <valyala@victoriametrics.com>
2024-01-21 02:12:51 +02:00
Aliaksandr Valialkin
74448a7e57
lib/promscrape/discovery/hetzner: follow-up after 03a97dc678
- docs/sd_configs.md: moved hetzner_sd_configs docs to the correct place according to alphabetical order of SD names,
  document missing __meta_hetzner_role label.
- lib/promscrape/config.go: added missing MustStop() call for Hetzner SD,
  and moved the code to the correct place according to alphabetical order of SD names.
- lib/promscrape/discovery/hetzner: properly handle pagination for hcloud API responses,
  populate missing __meta_hetzner_role label like Prometheus does.
- Properly populate __meta_hetzner_public_ipv6_network label like Prometheus does.
- Remove unused SDConfig.Token.
- Remove "omitempty" annotation from SDConfig.Role field, since this field is mandatory.

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/pull/5550
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/3154
2024-01-20 17:01:53 +02:00
Hui Wang
cfec258803
lib/promscrape: do not store last scrape response when stale markers … (#5577)
* lib/promscrape: do not store last scrape response when stale markers are disabled

* update changelog
2024-01-20 00:53:41 +08:00
Aliaksandr Valialkin
5bdf62de5b
lib/storage: do not prefetch metric names for small number of metricIDs
This eliminates prefetchedMetricIDsLock lock contention for queries that return less than 500 time series.

This is a follow-up for 9d886a2eb0
2024-01-17 13:48:15 +02:00
Aliaksandr Valialkin
41932db848
lib/promscrape: cosmetic changes after 3ac44baebe
- Rename mustLoadScrapeConfigFiles() to loadScrapeConfigFiles(), since now it may return an error.
- Split too long line with the error message into two lines in order to improve readability a bit.

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/5508
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/pull/5560
2024-01-16 22:29:09 +02:00
Aliaksandr Valialkin
4073bb3303
lib/httputils: handle step=undefined query arg as an empty value
This is needed for Grafana, which may send step=undefined
when working with alerting rules and instant queries.
2024-01-16 18:59:26 +02:00