From 04a05f161c91a6d572de7c67de0172880d2da7ec Mon Sep 17 00:00:00 2001
From: Aliaksandr Valialkin
Date: Mon, 10 Oct 2022 21:43:36 +0300
Subject: [PATCH] app/vmselect: return back the logic for limiting the amount
 of memory occupied by concurrently executed queries if
 -search.maxMemoryPerQuery isn't set

This is needed for preserving backwards compatibility with the previous releases of VictoriaMetrics.

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/3203
---
 README.md                                  |  4 +-
 app/vmselect/main.go                       |  1 -
 app/vmselect/promql/eval.go                | 48 +++++++++++--------
 app/vmselect/promql/memory_limiter.go      | 33 +++++++++++++
 app/vmselect/promql/memory_limiter_test.go | 56 ++++++++++++++++++++++
 docs/Cluster-VictoriaMetrics.md            |  4 +-
 docs/README.md                             |  4 +-
 docs/Single-server-VictoriaMetrics.md      |  4 +-
 8 files changed, 125 insertions(+), 29 deletions(-)
 create mode 100644 app/vmselect/promql/memory_limiter.go
 create mode 100644 app/vmselect/promql/memory_limiter_test.go

diff --git a/README.md b/README.md
index 1211bfad3..06d28d0a6 100644
--- a/README.md
+++ b/README.md
@@ -1264,7 +1264,7 @@ See also [resource usage limits docs](#resource-usage-limits).
 
 By default VictoriaMetrics is tuned for an optimal resource usage under typical workloads. Some workloads may need fine-grained resource usage limits. In these cases the following command-line flags may be useful:
 - `-memory.allowedPercent` and `-memory.allowedBytes` limit the amounts of memory, which may be used for various internal caches at VictoriaMetrics. Note that VictoriaMetrics may use more memory, since these flags don't limit additional memory, which may be needed on a per-query basis.
-- `-search.maxMemoryPerQuery` limits the amounts of memory, which can be used for processing a single query. Queries, which need more memory, are rejected. By default this limit is calculated by dividing `-search.allowedPercent` by `-search.maxConcurrentRequests`. Sometimes a heavy query, which selects big number of time series, may exceed the per-query memory limit by a small percent. The total memory limit for concurrently executed queries can be estimated as `-search.maxMemoryPerQuery` multiplied by `-search.maxConcurrentRequests`.
+- `-search.maxMemoryPerQuery` limits the amounts of memory, which can be used for processing a single query. Queries, which need more memory, are rejected. Heavy queries, which select big number of time series, may exceed the per-query memory limit by a small percent. The total memory limit for concurrently executed queries can be estimated as `-search.maxMemoryPerQuery` multiplied by `-search.maxConcurrentRequests`.
 - `-search.maxUniqueTimeseries` limits the number of unique time series a single query can find and process. VictoriaMetrics keeps in memory some metainformation about the time series located by each query and spends some CPU time for processing the found time series. This means that the maximum memory usage and CPU usage a single query can use is proportional to `-search.maxUniqueTimeseries`.
 - `-search.maxQueryDuration` limits the duration of a single query. If the query takes longer than the given duration, then it is canceled. This allows saving CPU and RAM when executing unexpected heavy queries.
 - `-search.maxConcurrentRequests` limits the number of concurrent requests VictoriaMetrics can process. Bigger number of concurrent requests usually means bigger memory usage. For example, if a single query needs 100 MiB of additional memory during its execution, then 100 concurrent queries may need `100 * 100 MiB = 10 GiB` of additional memory. So it is better to limit the number of concurrent queries, while suspending additional incoming queries if the concurrency limit is reached. VictoriaMetrics provides `-search.maxQueueDuration` command-line flag for limiting the max wait time for suspended queries. See also `-search.maxMemoryPerQuery` command-line flag.
@@ -2238,7 +2238,7 @@ Pass `-help` to VictoriaMetrics in order to see the list of supported command-li
   -search.maxLookback duration
      Synonym to -search.lookback-delta from Prometheus. The value is dynamically detected from interval between time series datapoints if not set. It can be overridden on per-query basis via max_lookback arg. See also '-search.maxStalenessInterval' flag, which has the same meaining due to historical reasons
   -search.maxMemoryPerQuery size
-     The maximum amounts of memory a single query may consume. Queries requiring more memory are rejected. The total memory limit for concurrently executed queries can be estimated as -search.maxMemoryPerQuery multiplied by -search.maxConcurrentRequests . If the -search.maxMemoryPerQuery isn't set, then it is automatically calculated by dividing -memory.allowedPercent by -search.maxConcurrentRequests
+     The maximum amounts of memory a single query may consume. Queries requiring more memory are rejected. The total memory limit for concurrently executed queries can be estimated as -search.maxMemoryPerQuery multiplied by -search.maxConcurrentRequests
      Supports the following optional suffixes for size values: KB, MB, GB, KiB, MiB, GiB (default 0)
   -search.maxPointsPerTimeseries int
      The maximum points per a single timeseries returned from /api/v1/query_range. This option doesn't limit the number of scanned raw samples in the database. The main purpose of this option is to limit the number of per-series points returned to graphing UI such as VMUI or Grafana. There is no sense in setting this limit to values bigger than the horizontal resolution of the graph (default 30000)
diff --git a/app/vmselect/main.go b/app/vmselect/main.go
index b991f98b2..51ae8f9ca 100644
--- a/app/vmselect/main.go
+++ b/app/vmselect/main.go
@@ -60,7 +60,6 @@ func Init() {
 	fs.RemoveDirContents(tmpDirPath)
 	netstorage.InitTmpBlocksDir(tmpDirPath)
 	promql.InitRollupResultCache(*vmstorage.DataPath + "/cache/rollupResult")
-	promql.InitMaxMemoryPerQuery(*maxConcurrentRequests)
 	concurrencyCh = make(chan struct{}, *maxConcurrentRequests)
 
 	initVMAlertProxy()
diff --git a/app/vmselect/promql/eval.go b/app/vmselect/promql/eval.go
index 137e8fd0d..9ca5de904 100644
--- a/app/vmselect/promql/eval.go
+++ b/app/vmselect/promql/eval.go
@@ -29,9 +29,8 @@ var (
 	maxPointsSubqueryPerTimeseries = flag.Int("search.maxPointsSubqueryPerTimeseries", 100e3, "The maximum number of points per series, which can be generated by subquery. "+
 		"See https://valyala.medium.com/prometheus-subqueries-in-victoriametrics-9b1492b720b3")
 	maxMemoryPerQuery = flagutil.NewBytes("search.maxMemoryPerQuery", 0, "The maximum amounts of memory a single query may consume. "+
-		"Queries requiring more memory are rejected. The total memory limit for concurrently executed queries can be estimated as "+
-		"-search.maxMemoryPerQuery multiplied by -search.maxConcurrentRequests . "+
-		"If the -search.maxMemoryPerQuery isn't set, then it is automatically calculated by dividing -memory.allowedPercent by -search.maxConcurrentRequests")
+		"Queries requiring more memory are rejected. The total memory limit for concurrently executed queries can be estimated "+
+		"as -search.maxMemoryPerQuery multiplied by -search.maxConcurrentRequests")
 	noStaleMarkers = flag.Bool("search.noStaleMarkers", false, "Set this flag to true if the database doesn't contain Prometheus stale markers, "+
 		"so there is no need in spending additional CPU time on its handling. Staleness markers may exist only in data obtained from Prometheus scrape targets")
 )
@@ -1058,17 +1057,29 @@ func evalRollupFuncWithMetricExpr(qt *querytracer.Tracer, ec *EvalConfig, funcNa
 	}
 	rollupPoints := mulNoOverflow(pointsPerTimeseries, int64(timeseriesLen*len(rcs)))
 	rollupMemorySize = sumNoOverflow(mulNoOverflow(int64(rssLen), 1000), mulNoOverflow(rollupPoints, 16))
-	maxMemory := getMaxMemoryPerQuery()
-	if rollupMemorySize > maxMemory {
+	if rollupMemorySize > int64(maxMemoryPerQuery.N) {
 		rss.Cancel()
 		return nil, &UserReadableError{
 			Err: fmt.Errorf("not enough memory for processing %d data points across %d time series with %d points in each time series "+
 				"according to -search.maxMemoryPerQuery=%d; requested memory: %d bytes; "+
-				"possible solutions are: reducing the number of matching time series; increasing -search.maxMemoryPerQuery; "+
-				"increasing `step` query arg (%gs)",
-				rollupPoints, timeseriesLen*len(rcs), pointsPerTimeseries, maxMemory, rollupMemorySize, float64(ec.Step)/1e3),
+				"possible solutions are: reducing the number of matching time series; increasing `step` query arg (step=%gs); "+
+				"increasing -search.maxMemoryPerQuery",
+				rollupPoints, timeseriesLen*len(rcs), pointsPerTimeseries, maxMemoryPerQuery.N, rollupMemorySize, float64(ec.Step)/1e3),
 		}
 	}
+	rml := getRollupMemoryLimiter()
+	if !rml.Get(uint64(rollupMemorySize)) {
+		rss.Cancel()
+		return nil, &UserReadableError{
+			Err: fmt.Errorf("not enough memory for processing %d data points across %d time series with %d points in each time series; "+
+				"total available memory for concurrent requests: %d bytes; "+
+				"requested memory: %d bytes; "+
+				"possible solutions are: reducing the number of matching time series; increasing `step` query arg (step=%gs); "+
+				"switching to node with more RAM; increasing -memory.allowedPercent",
+				rollupPoints, timeseriesLen*len(rcs), pointsPerTimeseries, rml.MaxSize, uint64(rollupMemorySize), float64(ec.Step)/1e3),
+		}
+	}
+	defer rml.Put(uint64(rollupMemorySize))
 
 	// Evaluate rollup
 	keepMetricNames := getKeepMetricNames(expr)
@@ -1088,21 +1099,18 @@ func evalRollupFuncWithMetricExpr(qt *querytracer.Tracer, ec *EvalConfig, funcNa
 	return tss, nil
 }
 
-func getMaxMemoryPerQuery() int64 {
-	if n := maxMemoryPerQuery.N; n > 0 {
-		return int64(n)
-	}
-	return maxMemoryPerQueryDefault
-}
+var (
+	rollupMemoryLimiter     memoryLimiter
+	rollupMemoryLimiterOnce sync.Once
+)
 
-// InitMaxMemoryPerQuery must be called after flag.Parse and before promql usage.
-func InitMaxMemoryPerQuery(maxConcurrentRequests int) {
-	n := int(0.8*float64(memory.Allowed())) / maxConcurrentRequests
-	maxMemoryPerQueryDefault = int64(n)
+func getRollupMemoryLimiter() *memoryLimiter {
+	rollupMemoryLimiterOnce.Do(func() {
+		rollupMemoryLimiter.MaxSize = uint64(memory.Allowed()) / 4
+	})
+	return &rollupMemoryLimiter
 }
 
-var maxMemoryPerQueryDefault int64
-
 func evalRollupWithIncrementalAggregate(qt *querytracer.Tracer, funcName string, keepMetricNames bool,
 	iafc *incrementalAggrFuncContext, rss *netstorage.Results, rcs []*rollupConfig,
 	preFunc func(values []float64, timestamps []int64), sharedTimestamps []int64) ([]*timeseries, error) {
diff --git a/app/vmselect/promql/memory_limiter.go b/app/vmselect/promql/memory_limiter.go
new file mode 100644
index 000000000..e9a76b143
--- /dev/null
+++ b/app/vmselect/promql/memory_limiter.go
@@ -0,0 +1,33 @@
+package promql
+
+import (
+	"sync"
+
+	"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
+)
+
+type memoryLimiter struct {
+	MaxSize uint64
+
+	mu    sync.Mutex
+	usage uint64
+}
+
+func (ml *memoryLimiter) Get(n uint64) bool {
+	ml.mu.Lock()
+	ok := n <= ml.MaxSize && ml.MaxSize-n >= ml.usage
+	if ok {
+		ml.usage += n
+	}
+	ml.mu.Unlock()
+	return ok
+}
+
+func (ml *memoryLimiter) Put(n uint64) {
+	ml.mu.Lock()
+	if n > ml.usage {
+		logger.Panicf("BUG: n=%d cannot exceed %d", n, ml.usage)
+	}
+	ml.usage -= n
+	ml.mu.Unlock()
+}
diff --git a/app/vmselect/promql/memory_limiter_test.go b/app/vmselect/promql/memory_limiter_test.go
new file mode 100644
index 000000000..4477678e4
--- /dev/null
+++ b/app/vmselect/promql/memory_limiter_test.go
@@ -0,0 +1,56 @@
+package promql
+
+import (
+	"testing"
+)
+
+func TestMemoryLimiter(t *testing.T) {
+	var ml memoryLimiter
+	ml.MaxSize = 100
+
+	// Allocate memory
+	if !ml.Get(10) {
+		t.Fatalf("cannot get 10 out of %d bytes", ml.MaxSize)
+	}
+	if ml.usage != 10 {
+		t.Fatalf("unexpected usage; got %d; want %d", ml.usage, 10)
+	}
+	if !ml.Get(20) {
+		t.Fatalf("cannot get 20 out of 90 bytes")
+	}
+	if ml.usage != 30 {
+		t.Fatalf("unexpected usage; got %d; want %d", ml.usage, 30)
+	}
+	if ml.Get(1000) {
+		t.Fatalf("unexpected get for 1000 bytes")
+	}
+	if ml.usage != 30 {
+		t.Fatalf("unexpected usage; got %d; want %d", ml.usage, 30)
+	}
+	if ml.Get(71) {
+		t.Fatalf("unexpected get for 71 bytes")
+	}
+	if ml.usage != 30 {
+		t.Fatalf("unexpected usage; got %d; want %d", ml.usage, 30)
+	}
+	if !ml.Get(70) {
+		t.Fatalf("cannot get 70 bytes")
+	}
+	if ml.usage != 100 {
+		t.Fatalf("unexpected usage; got %d; want %d", ml.usage, 100)
+	}
+
+	// Return memory back
+	ml.Put(10)
+	ml.Put(70)
+	if ml.usage != 20 {
+		t.Fatalf("unexpected usage; got %d; want %d", ml.usage, 20)
+	}
+	if !ml.Get(30) {
+		t.Fatalf("cannot get 30 bytes")
+	}
+	ml.Put(50)
+	if ml.usage != 0 {
+		t.Fatalf("unexpected usage; got %d; want %d", ml.usage, 0)
+	}
+}
diff --git a/docs/Cluster-VictoriaMetrics.md b/docs/Cluster-VictoriaMetrics.md
index a2f3b8fb7..92de6c0eb 100644
--- a/docs/Cluster-VictoriaMetrics.md
+++ b/docs/Cluster-VictoriaMetrics.md
@@ -470,7 +470,7 @@ See also [resource usage limits docs](#resource-usage-limits).
 
 By default cluster components of VictoriaMetrics are tuned for an optimal resource usage under typical workloads. Some workloads may need fine-grained resource usage limits. In these cases the following command-line flags may be useful:
 - `-memory.allowedPercent` and `-memory.allowedBytes` limit the amounts of memory, which may be used for various internal caches at all the cluster components of VictoriaMetrics - `vminsert`, `vmselect` and `vmstorage`. Note that VictoriaMetrics components may use more memory, since these flags don't limit additional memory, which may be needed on a per-query basis.
-- `-search.maxMemoryPerQuery` limits the amounts of memory, which can be used for processing a single query at `vmselect` node. Queries, which need more memory, are rejected. By default this limit is calculated by dividing `-search.allowedPercent` by `-search.maxConcurrentRequests`. Sometimes a heavy query, which selects big number of time series, may exceed the per-query memory limit by a small percent. The total memory limit for concurrently executed queries can be estimated as `-search.maxMemoryPerQuery` multiplied by `-search.maxConcurrentRequests`.
+- `-search.maxMemoryPerQuery` limits the amounts of memory, which can be used for processing a single query at `vmselect` node. Queries, which need more memory, are rejected. Heavy queries, which select big number of time series, may exceed the per-query memory limit by a small percent. The total memory limit for concurrently executed queries can be estimated as `-search.maxMemoryPerQuery` multiplied by `-search.maxConcurrentRequests`.
 - `-search.maxUniqueTimeseries` at `vmselect` component limits the number of unique time series a single query can find and process. `vmselect` passes the limit to `vmstorage` component, which keeps in memory some metainformation about the time series located by each query and spends some CPU time for processing the found time series. This means that the maximum memory usage and CPU usage a single query can use at `vmstorage` is proportional to `-search.maxUniqueTimeseries`.
 - `-search.maxQueryDuration` at `vmselect` limits the duration of a single query. If the query takes longer than the given duration, then it is canceled. This allows saving CPU and RAM at `vmselect` and `vmstorage` when executing unexpected heavy queries.
 - `-search.maxConcurrentRequests` at `vmselect` limits the number of concurrent requests a single `vmselect` node can process. Bigger number of concurrent requests usually means bigger memory usage at both `vmselect` and `vmstorage`. For example, if a single query needs 100 MiB of additional memory during its execution, then 100 concurrent queries may need `100 * 100 MiB = 10 GiB` of additional memory. So it is better to limit the number of concurrent queries, while suspending additional incoming queries if the concurrency limit is reached. `vmselect` provides `-search.maxQueueDuration` command-line flag for limiting the max wait time for suspended queries. See also `-search.maxMemoryPerQuery` command-line flag.
@@ -958,7 +958,7 @@ Below is the output for `/path/to/vmselect -help`:
   -search.maxLookback duration
      Synonym to -search.lookback-delta from Prometheus. The value is dynamically detected from interval between time series datapoints if not set. It can be overridden on per-query basis via max_lookback arg. See also '-search.maxStalenessInterval' flag, which has the same meaining due to historical reasons
   -search.maxMemoryPerQuery size
-     The maximum amounts of memory a single query may consume. Queries requiring more memory are rejected. The total memory limit for concurrently executed queries can be estimated as -search.maxMemoryPerQuery multiplied by -search.maxConcurrentRequests . If the -search.maxMemoryPerQuery isn't set, then it is automatically calculated by dividing -memory.allowedPercent by -search.maxConcurrentRequests
+     The maximum amounts of memory a single query may consume. Queries requiring more memory are rejected. The total memory limit for concurrently executed queries can be estimated as -search.maxMemoryPerQuery multiplied by -search.maxConcurrentRequests
      Supports the following optional suffixes for size values: KB, MB, GB, KiB, MiB, GiB (default 0)
   -search.maxPointsPerTimeseries int
      The maximum points per a single timeseries returned from /api/v1/query_range. This option doesn't limit the number of scanned raw samples in the database. The main purpose of this option is to limit the number of per-series points returned to graphing UI such as VMUI or Grafana. There is no sense in setting this limit to values bigger than the horizontal resolution of the graph (default 30000)
diff --git a/docs/README.md b/docs/README.md
index 987702361..edda095e7 100644
--- a/docs/README.md
+++ b/docs/README.md
@@ -1265,7 +1265,7 @@ See also [resource usage limits docs](#resource-usage-limits).
 
 By default VictoriaMetrics is tuned for an optimal resource usage under typical workloads. Some workloads may need fine-grained resource usage limits. In these cases the following command-line flags may be useful:
 - `-memory.allowedPercent` and `-memory.allowedBytes` limit the amounts of memory, which may be used for various internal caches at VictoriaMetrics. Note that VictoriaMetrics may use more memory, since these flags don't limit additional memory, which may be needed on a per-query basis.
-- `-search.maxMemoryPerQuery` limits the amounts of memory, which can be used for processing a single query. Queries, which need more memory, are rejected. By default this limit is calculated by dividing `-search.allowedPercent` by `-search.maxConcurrentRequests`. Sometimes a heavy query, which selects big number of time series, may exceed the per-query memory limit by a small percent. The total memory limit for concurrently executed queries can be estimated as `-search.maxMemoryPerQuery` multiplied by `-search.maxConcurrentRequests`.
+- `-search.maxMemoryPerQuery` limits the amounts of memory, which can be used for processing a single query. Queries, which need more memory, are rejected. Heavy queries, which select big number of time series, may exceed the per-query memory limit by a small percent. The total memory limit for concurrently executed queries can be estimated as `-search.maxMemoryPerQuery` multiplied by `-search.maxConcurrentRequests`.
 - `-search.maxUniqueTimeseries` limits the number of unique time series a single query can find and process. VictoriaMetrics keeps in memory some metainformation about the time series located by each query and spends some CPU time for processing the found time series. This means that the maximum memory usage and CPU usage a single query can use is proportional to `-search.maxUniqueTimeseries`.
 - `-search.maxQueryDuration` limits the duration of a single query. If the query takes longer than the given duration, then it is canceled. This allows saving CPU and RAM when executing unexpected heavy queries.
 - `-search.maxConcurrentRequests` limits the number of concurrent requests VictoriaMetrics can process. Bigger number of concurrent requests usually means bigger memory usage. For example, if a single query needs 100 MiB of additional memory during its execution, then 100 concurrent queries may need `100 * 100 MiB = 10 GiB` of additional memory. So it is better to limit the number of concurrent queries, while suspending additional incoming queries if the concurrency limit is reached. VictoriaMetrics provides `-search.maxQueueDuration` command-line flag for limiting the max wait time for suspended queries. See also `-search.maxMemoryPerQuery` command-line flag.
@@ -2239,7 +2239,7 @@ Pass `-help` to VictoriaMetrics in order to see the list of supported command-li
   -search.maxLookback duration
      Synonym to -search.lookback-delta from Prometheus. The value is dynamically detected from interval between time series datapoints if not set. It can be overridden on per-query basis via max_lookback arg. See also '-search.maxStalenessInterval' flag, which has the same meaining due to historical reasons
   -search.maxMemoryPerQuery size
-     The maximum amounts of memory a single query may consume. Queries requiring more memory are rejected. The total memory limit for concurrently executed queries can be estimated as -search.maxMemoryPerQuery multiplied by -search.maxConcurrentRequests . If the -search.maxMemoryPerQuery isn't set, then it is automatically calculated by dividing -memory.allowedPercent by -search.maxConcurrentRequests
+     The maximum amounts of memory a single query may consume. Queries requiring more memory are rejected. The total memory limit for concurrently executed queries can be estimated as -search.maxMemoryPerQuery multiplied by -search.maxConcurrentRequests
      Supports the following optional suffixes for size values: KB, MB, GB, KiB, MiB, GiB (default 0)
   -search.maxPointsPerTimeseries int
      The maximum points per a single timeseries returned from /api/v1/query_range. This option doesn't limit the number of scanned raw samples in the database. The main purpose of this option is to limit the number of per-series points returned to graphing UI such as VMUI or Grafana. There is no sense in setting this limit to values bigger than the horizontal resolution of the graph (default 30000)
diff --git a/docs/Single-server-VictoriaMetrics.md b/docs/Single-server-VictoriaMetrics.md
index bd13dcf51..7996edb21 100644
--- a/docs/Single-server-VictoriaMetrics.md
+++ b/docs/Single-server-VictoriaMetrics.md
@@ -1268,7 +1268,7 @@ See also [resource usage limits docs](#resource-usage-limits).
 
 By default VictoriaMetrics is tuned for an optimal resource usage under typical workloads. Some workloads may need fine-grained resource usage limits. In these cases the following command-line flags may be useful:
 - `-memory.allowedPercent` and `-memory.allowedBytes` limit the amounts of memory, which may be used for various internal caches at VictoriaMetrics. Note that VictoriaMetrics may use more memory, since these flags don't limit additional memory, which may be needed on a per-query basis.
-- `-search.maxMemoryPerQuery` limits the amounts of memory, which can be used for processing a single query. Queries, which need more memory, are rejected. By default this limit is calculated by dividing `-search.allowedPercent` by `-search.maxConcurrentRequests`. Sometimes a heavy query, which selects big number of time series, may exceed the per-query memory limit by a small percent. The total memory limit for concurrently executed queries can be estimated as `-search.maxMemoryPerQuery` multiplied by `-search.maxConcurrentRequests`.
+- `-search.maxMemoryPerQuery` limits the amounts of memory, which can be used for processing a single query. Queries, which need more memory, are rejected. Heavy queries, which select big number of time series, may exceed the per-query memory limit by a small percent. The total memory limit for concurrently executed queries can be estimated as `-search.maxMemoryPerQuery` multiplied by `-search.maxConcurrentRequests`.
 - `-search.maxUniqueTimeseries` limits the number of unique time series a single query can find and process. VictoriaMetrics keeps in memory some metainformation about the time series located by each query and spends some CPU time for processing the found time series. This means that the maximum memory usage and CPU usage a single query can use is proportional to `-search.maxUniqueTimeseries`.
 - `-search.maxQueryDuration` limits the duration of a single query. If the query takes longer than the given duration, then it is canceled. This allows saving CPU and RAM when executing unexpected heavy queries.
 - `-search.maxConcurrentRequests` limits the number of concurrent requests VictoriaMetrics can process. Bigger number of concurrent requests usually means bigger memory usage. For example, if a single query needs 100 MiB of additional memory during its execution, then 100 concurrent queries may need `100 * 100 MiB = 10 GiB` of additional memory. So it is better to limit the number of concurrent queries, while suspending additional incoming queries if the concurrency limit is reached. VictoriaMetrics provides `-search.maxQueueDuration` command-line flag for limiting the max wait time for suspended queries. See also `-search.maxMemoryPerQuery` command-line flag.
@@ -2242,7 +2242,7 @@ Pass `-help` to VictoriaMetrics in order to see the list of supported command-li
   -search.maxLookback duration
      Synonym to -search.lookback-delta from Prometheus. The value is dynamically detected from interval between time series datapoints if not set. It can be overridden on per-query basis via max_lookback arg. See also '-search.maxStalenessInterval' flag, which has the same meaining due to historical reasons
   -search.maxMemoryPerQuery size
-     The maximum amounts of memory a single query may consume. Queries requiring more memory are rejected. The total memory limit for concurrently executed queries can be estimated as -search.maxMemoryPerQuery multiplied by -search.maxConcurrentRequests . If the -search.maxMemoryPerQuery isn't set, then it is automatically calculated by dividing -memory.allowedPercent by -search.maxConcurrentRequests
+     The maximum amounts of memory a single query may consume. Queries requiring more memory are rejected. The total memory limit for concurrently executed queries can be estimated as -search.maxMemoryPerQuery multiplied by -search.maxConcurrentRequests
      Supports the following optional suffixes for size values: KB, MB, GB, KiB, MiB, GiB (default 0)
   -search.maxPointsPerTimeseries int
      The maximum points per a single timeseries returned from /api/v1/query_range. This option doesn't limit the number of scanned raw samples in the database. The main purpose of this option is to limit the number of per-series points returned to graphing UI such as VMUI or Grafana. There is no sense in setting this limit to values bigger than the horizontal resolution of the graph (default 30000)
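
For readers who want to see the restored behaviour outside of the vmselect code base, below is a minimal runnable sketch. The memoryLimiter mirrors the new app/vmselect/promql/memory_limiter.go added by this patch; the reserveAndRun helper, the 1 MiB limit and the byte figures are hypothetical stand-ins for the eval.go wiring, where the shared limit is memory.Allowed()/4 and the reservation size comes from rollupMemorySize.

```go
// Standalone sketch (not part of the patch): how the shared memory limiter
// gates concurrently executed queries when -search.maxMemoryPerQuery is unset.
package main

import (
	"fmt"
	"sync"
)

// memoryLimiter mirrors app/vmselect/promql/memory_limiter.go from this patch.
type memoryLimiter struct {
	MaxSize uint64

	mu    sync.Mutex
	usage uint64
}

// Get reserves n bytes if the total reserved memory stays within MaxSize.
func (ml *memoryLimiter) Get(n uint64) bool {
	ml.mu.Lock()
	ok := n <= ml.MaxSize && ml.MaxSize-n >= ml.usage
	if ok {
		ml.usage += n
	}
	ml.mu.Unlock()
	return ok
}

// Put returns previously reserved bytes; the patch uses logger.Panicf here.
func (ml *memoryLimiter) Put(n uint64) {
	ml.mu.Lock()
	if n > ml.usage {
		panic(fmt.Sprintf("BUG: n=%d cannot exceed %d", n, ml.usage))
	}
	ml.usage -= n
	ml.mu.Unlock()
}

// reserveAndRun is a hypothetical helper modelling the eval.go flow:
// reject the query if its estimated memory doesn't fit into the shared limit,
// otherwise hold the reservation for the duration of the query.
func reserveAndRun(ml *memoryLimiter, estimatedBytes uint64, query func()) error {
	if !ml.Get(estimatedBytes) {
		return fmt.Errorf("not enough memory for the query: requested %d bytes; "+
			"total available memory for concurrent requests: %d bytes", estimatedBytes, ml.MaxSize)
	}
	defer ml.Put(estimatedBytes)
	query()
	return nil
}

func main() {
	// In vmselect the shared limit is memory.Allowed()/4; 1 MiB is a made-up value.
	ml := &memoryLimiter{MaxSize: 1 << 20}

	// Two concurrent 600 KiB reservations don't fit into 1 MiB: the second is rejected.
	fmt.Println(ml.Get(600 << 10)) // true
	fmt.Println(ml.Get(600 << 10)) // false
	ml.Put(600 << 10)

	// Once the first query releases its reservation, the next one fits again.
	err := reserveAndRun(ml, 600<<10, func() { fmt.Println("query executed") })
	fmt.Println(err) // <nil>
}
```

The sketch also shows why the per-query check against maxMemoryPerQuery.N and the shared rml.Get() reservation in eval.go are complementary: the former rejects a single oversized query outright, while the latter pushes back once the sum of estimates for in-flight queries reaches the shared cap.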