app/vmselect/graphite: open source Graphite Render API

This commit is contained in:
Aliaksandr Valialkin 2023-03-31 23:25:04 -07:00
parent cddfc4d3f8
commit ffdf430be0
No known key found for this signature in database
GPG Key ID: A72BEC6CD3D0DED1
21 changed files with 15196 additions and 35 deletions

View File

@@ -40,7 +40,8 @@ VictoriaMetrics has the following prominent features:
* It can be used as long-term storage for Prometheus. See [these docs](#prometheus-setup) for details.
* It can be used as a drop-in replacement for Prometheus in Grafana, because it supports [Prometheus querying API](#prometheus-querying-api-usage).
* It can be used as a drop-in replacement for Graphite in Grafana, because it supports [Graphite API](#graphite-api-usage).
* It features easy setup and operation:
VictoriaMetrics allows reducing infrastructure costs by more than 10x compared to Graphite - see [this case study](https://docs.victoriametrics.com/CaseStudies.html#grammarly).
* It is easy to set up and operate:
* VictoriaMetrics consists of a single [small executable](https://medium.com/@valyala/stripping-dependency-bloat-in-victoriametrics-docker-image-983fb5912b0d)
without external dependencies.
* All the configuration is done via explicit command-line flags with reasonable defaults.
@@ -627,7 +628,6 @@ The `__graphite__` pseudo-label supports e.g. alternate regexp filters such as `
VictoriaMetrics also supports Graphite query language - see [these docs](#graphite-render-api-usage).
## How to send data from OpenTSDB-compatible agents
VictoriaMetrics supports [telnet put protocol](http://opentsdb.net/docs/build/html/api_telnet/put.html)
@@ -829,10 +829,10 @@ VictoriaMetrics supports `__graphite__` pseudo-label for filtering time series w
### Graphite Render API usage
[VictoriaMetrics Enterprise](https://docs.victoriametrics.com/enterprise.html) supports [Graphite Render API](https://graphite.readthedocs.io/en/stable/render_api.html) subset
VictoriaMetrics supports a subset of the [Graphite Render API](https://graphite.readthedocs.io/en/stable/render_api.html)
at the `/render` endpoint, which is used by the [Graphite datasource in Grafana](https://grafana.com/docs/grafana/latest/datasources/graphite/).
Enterprise binaries can be downloaded and evaluated for free from [the releases page](https://github.com/VictoriaMetrics/VictoriaMetrics/releases).
When configuring the Graphite datasource in Grafana, the `Storage-Step` http request header must be set to the step between Graphite data points
stored in VictoriaMetrics. For example, `Storage-Step: 10s` means a 10-second interval between Graphite data points stored in VictoriaMetrics.
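A request with this header might look as follows (the `foo.bar.*` target and the default single-node port 8428 are assumptions for illustration):

```console
curl -H 'Storage-Step: 10s' 'http://localhost:8428/render?format=json&target=foo.bar.*&from=-1h&until=now'
```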
### Graphite Metrics API usage
@@ -2438,9 +2438,9 @@ Pass `-help` to VictoriaMetrics in order to see the list of supported command-li
-search.disableCache
Whether to disable response caching. This may be useful during data backfilling
-search.graphiteMaxPointsPerSeries int
The maximum number of points per series Graphite render API can return. This flag is available only in VictoriaMetrics enterprise. See https://docs.victoriametrics.com/enterprise.html (default 1000000)
The maximum number of points per series the Graphite Render API can return (default 1000000)
-search.graphiteStorageStep duration
The interval between datapoints stored in the database. It is used at Graphite Render API handler for normalizing the interval between datapoints in case it isn't normalized. It can be overridden by sending 'storage_step' query arg to /render API or by sending the desired interval via 'Storage-Step' http header during querying /render API. This flag is available only in VictoriaMetrics enterprise. See https://docs.victoriametrics.com/enterprise.html (default 10s)
The interval between datapoints stored in the database. It is used by the Graphite Render API handler for normalizing the interval between datapoints in case it isn't normalized. It can be overridden by sending the 'storage_step' query arg to the /render API or by sending the desired interval via the 'Storage-Step' http header when querying the /render API (default 10s)
-search.latencyOffset duration
The time when data points become visible in query results after collection. It can be overridden on a per-query basis via the latency_offset arg. Too small a value can result in incomplete last points for query results (default 30s)
-search.logQueryMemoryUsage size
@@ -2457,7 +2457,7 @@ Pass `-help` to VictoriaMetrics in order to see the list of supported command-li
-search.maxFederateSeries int
The maximum number of time series, which can be returned from /federate. This option allows limiting memory usage (default 1000000)
-search.maxGraphiteSeries int
The maximum number of time series, which can be scanned during queries to Graphite Render API. See https://docs.victoriametrics.com/#graphite-render-api-usage . This flag is available only in VictoriaMetrics enterprise. See https://docs.victoriametrics.com/enterprise.html (default 300000)
The maximum number of time series, which can be scanned during queries to Graphite Render API. See https://docs.victoriametrics.com/#graphite-render-api-usage (default 300000)
-search.maxLookback duration
Synonym for -search.lookback-delta from Prometheus. The value is dynamically detected from the interval between time series datapoints if not set. It can be overridden on a per-query basis via the max_lookback arg. See also the '-search.maxStalenessInterval' flag, which has the same meaning due to historical reasons
-search.maxMemoryPerQuery size

View File

@@ -0,0 +1,259 @@
package graphite
import (
"fmt"
"math"
"strings"
"sync"
"github.com/valyala/histogram"
)
var aggrFuncs = map[string]aggrFunc{
"average": aggrAvg,
"avg": aggrAvg,
"avg_zero": aggrAvgZero,
"median": aggrMedian,
"sum": aggrSum,
"total": aggrSum,
"min": aggrMin,
"max": aggrMax,
"diff": aggrDiff,
"pow": aggrPow,
"stddev": aggrStddev,
"count": aggrCount,
"range": aggrRange,
"rangeOf": aggrRange,
"multiply": aggrMultiply,
"first": aggrFirst,
"last": aggrLast,
"current": aggrLast,
}
func getAggrFunc(funcName string) (aggrFunc, error) {
s := strings.TrimSuffix(funcName, "Series")
aggrFunc := aggrFuncs[s]
if aggrFunc == nil {
return nil, fmt.Errorf("unsupported aggregate function %q", funcName)
}
return aggrFunc, nil
}
type aggrFunc func(values []float64) float64
func (af aggrFunc) apply(xFilesFactor float64, values []float64) float64 {
if aggrCount(values) >= float64(len(values))*xFilesFactor {
return af(values)
}
return nan
}
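// Illustrative sketch: with xFilesFactor=0.5 the aggregation runs only when
// at least half of the points are non-NaN:
//
//	f := aggrFunc(aggrSum)
//	f.apply(0.5, []float64{1, 2, nan, nan})   // 3, since 2 of 4 points are non-NaN
//	f.apply(0.5, []float64{1, nan, nan, nan}) // nan, since only 1 of 4 points is non-NaN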
func aggrAvg(values []float64) float64 {
pos := getFirstNonNaNPos(values)
if pos < 0 {
return nan
}
sum := values[pos]
count := 1
for _, v := range values[pos+1:] {
if !math.IsNaN(v) {
sum += v
count++
}
}
return sum / float64(count)
}
func aggrAvgZero(values []float64) float64 {
if len(values) == 0 {
return nan
}
sum := float64(0)
for _, v := range values {
if !math.IsNaN(v) {
sum += v
}
}
return sum / float64(len(values))
}
var aggrMedian = newAggrFuncPercentile(50)
func aggrSum(values []float64) float64 {
pos := getFirstNonNaNPos(values)
if pos < 0 {
return nan
}
sum := values[pos]
for _, v := range values[pos+1:] {
if !math.IsNaN(v) {
sum += v
}
}
return sum
}
func aggrMin(values []float64) float64 {
pos := getFirstNonNaNPos(values)
if pos < 0 {
return nan
}
min := values[pos]
for _, v := range values[pos+1:] {
if !math.IsNaN(v) && v < min {
min = v
}
}
return min
}
func aggrMax(values []float64) float64 {
pos := getFirstNonNaNPos(values)
if pos < 0 {
return nan
}
max := values[pos]
for _, v := range values[pos+1:] {
if !math.IsNaN(v) && v > max {
max = v
}
}
return max
}
func aggrDiff(values []float64) float64 {
pos := getFirstNonNaNPos(values)
if pos < 0 {
return nan
}
sum := float64(0)
for _, v := range values[pos+1:] {
if !math.IsNaN(v) {
sum += v
}
}
return values[pos] - sum
}
func aggrPow(values []float64) float64 {
pos := getFirstNonNaNPos(values)
if pos < 0 {
return nan
}
pow := values[pos]
for _, v := range values[pos+1:] {
if !math.IsNaN(v) {
pow = math.Pow(pow, v)
}
}
return pow
}
func aggrStddev(values []float64) float64 {
avg := aggrAvg(values)
if math.IsNaN(avg) {
return nan
}
sum := float64(0)
count := 0
for _, v := range values {
if !math.IsNaN(v) {
d := avg - v
sum += d * d
count++
}
}
return math.Sqrt(sum / float64(count))
}
func aggrCount(values []float64) float64 {
count := 0
for _, v := range values {
if !math.IsNaN(v) {
count++
}
}
return float64(count)
}
func aggrRange(values []float64) float64 {
min := aggrMin(values)
if math.IsNaN(min) {
return nan
}
max := aggrMax(values)
return max - min
}
func aggrMultiply(values []float64) float64 {
pos := getFirstNonNaNPos(values)
if pos < 0 {
return nan
}
p := values[pos]
for _, v := range values[pos+1:] {
if !math.IsNaN(v) {
p *= v
}
}
return p
}
func aggrFirst(values []float64) float64 {
pos := getFirstNonNaNPos(values)
if pos < 0 {
return nan
}
return values[pos]
}
func aggrLast(values []float64) float64 {
for i := len(values) - 1; i >= 0; i-- {
v := values[i]
if !math.IsNaN(v) {
return v
}
}
return nan
}
func getFirstNonNaNPos(values []float64) int {
for i, v := range values {
if !math.IsNaN(v) {
return i
}
}
return -1
}
var nan = math.NaN()
func newAggrFuncPercentile(n float64) aggrFunc {
f := func(values []float64) float64 {
h := getHistogram()
for _, v := range values {
if !math.IsNaN(v) {
h.Update(v)
}
}
p := h.Quantile(n / 100)
putHistogram(h)
return p
}
return f
}
func getHistogram() *histogram.Fast {
return histogramPool.Get().(*histogram.Fast)
}
func putHistogram(h *histogram.Fast) {
h.Reset()
histogramPool.Put(h)
}
var histogramPool = &sync.Pool{
New: func() interface{} {
return histogram.NewFast()
},
}

View File

@@ -0,0 +1,724 @@
package graphite
import (
"fmt"
"math"
"strings"
"github.com/valyala/histogram"
)
var aggrStateFuncs = map[string]func(int) aggrState{
"average": newAggrStateAvg,
"avg": newAggrStateAvg,
"avg_zero": newAggrStateAvgZero,
"median": newAggrStateMedian,
"sum": newAggrStateSum,
"total": newAggrStateSum,
"min": newAggrStateMin,
"max": newAggrStateMax,
"diff": newAggrStateDiff,
"pow": newAggrStatePow,
"stddev": newAggrStateStddev,
"count": newAggrStateCount,
"range": newAggrStateRange,
"rangeOf": newAggrStateRange,
"multiply": newAggrStateMultiply,
"first": newAggrStateFirst,
"last": newAggrStateLast,
"current": newAggrStateLast,
}
type aggrState interface {
Update(values []float64)
Finalize(xFilesFactor float64) []float64
}
func newAggrState(pointsLen int, funcName string) (aggrState, error) {
s := strings.TrimSuffix(funcName, "Series")
asf := aggrStateFuncs[s]
if asf == nil {
return nil, fmt.Errorf("unsupported aggregate function %q", funcName)
}
return asf(pointsLen), nil
}
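// A minimal usage sketch with hypothetical data: aggregate two series
// point-by-point with the "sum" state and a zero xFilesFactor:
//
//	as, _ := newAggrState(3, "sumSeries")
//	as.Update([]float64{1, 2, nan})
//	as.Update([]float64{10, nan, nan})
//	values := as.Finalize(0) // [11, 2, nan]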
type aggrStateAvg struct {
pointsLen int
sums []float64
counts []int
seriesTotal int
}
func newAggrStateAvg(pointsLen int) aggrState {
return &aggrStateAvg{
pointsLen: pointsLen,
sums: make([]float64, pointsLen),
counts: make([]int, pointsLen),
}
}
func (as *aggrStateAvg) Update(values []float64) {
if len(values) != as.pointsLen {
panic(fmt.Errorf("BUG: unexpected number of points in values; got %d; want %d", len(values), as.pointsLen))
}
sums := as.sums
counts := as.counts
for i, v := range values {
if !math.IsNaN(v) {
sums[i] += v
counts[i]++
}
}
as.seriesTotal++
}
func (as *aggrStateAvg) Finalize(xFilesFactor float64) []float64 {
sums := as.sums
counts := as.counts
values := make([]float64, as.pointsLen)
xff := int(xFilesFactor * float64(as.seriesTotal))
for i, count := range counts {
v := nan
if count > 0 && count >= xff {
v = sums[i] / float64(count)
}
values[i] = v
}
return values
}
type aggrStateAvgZero struct {
pointsLen int
sums []float64
seriesTotal int
}
func newAggrStateAvgZero(pointsLen int) aggrState {
return &aggrStateAvgZero{
pointsLen: pointsLen,
sums: make([]float64, pointsLen),
}
}
func (as *aggrStateAvgZero) Update(values []float64) {
if len(values) != as.pointsLen {
panic(fmt.Errorf("BUG: unexpected number of points in values; got %d; want %d", len(values), as.pointsLen))
}
sums := as.sums
for i, v := range values {
if !math.IsNaN(v) {
sums[i] += v
}
}
as.seriesTotal++
}
func (as *aggrStateAvgZero) Finalize(xFilesFactor float64) []float64 {
sums := as.sums
values := make([]float64, as.pointsLen)
count := float64(as.seriesTotal)
for i, sum := range sums {
v := nan
if count > 0 {
v = sum / count
}
values[i] = v
}
return values
}
func newAggrStateMedian(pointsLen int) aggrState {
return newAggrStatePercentile(pointsLen, 50)
}
type aggrStatePercentile struct {
phi float64
pointsLen int
hs []*histogram.Fast
counts []int
seriesTotal int
}
func newAggrStatePercentile(pointsLen int, n float64) aggrState {
hs := make([]*histogram.Fast, pointsLen)
for i := 0; i < pointsLen; i++ {
hs[i] = histogram.NewFast()
}
return &aggrStatePercentile{
phi: n / 100,
pointsLen: pointsLen,
hs: hs,
counts: make([]int, pointsLen),
}
}
func (as *aggrStatePercentile) Update(values []float64) {
if len(values) != as.pointsLen {
panic(fmt.Errorf("BUG: unexpected number of points in values; got %d; want %d", len(values), as.pointsLen))
}
hs := as.hs
counts := as.counts
for i, v := range values {
if !math.IsNaN(v) {
hs[i].Update(v)
counts[i]++
}
}
as.seriesTotal++
}
func (as *aggrStatePercentile) Finalize(xFilesFactor float64) []float64 {
xff := int(xFilesFactor * float64(as.seriesTotal))
values := make([]float64, as.pointsLen)
hs := as.hs
for i, count := range as.counts {
v := nan
if count > 0 && count >= xff {
v = hs[i].Quantile(as.phi)
}
values[i] = v
}
return values
}
type aggrStateSum struct {
pointsLen int
sums []float64
counts []int
seriesTotal int
}
func newAggrStateSum(pointsLen int) aggrState {
return &aggrStateSum{
pointsLen: pointsLen,
sums: make([]float64, pointsLen),
counts: make([]int, pointsLen),
}
}
func (as *aggrStateSum) Update(values []float64) {
if len(values) != as.pointsLen {
panic(fmt.Errorf("BUG: unexpected number of points in values; got %d; want %d", len(values), as.pointsLen))
}
sums := as.sums
counts := as.counts
for i, v := range values {
if !math.IsNaN(v) {
sums[i] += v
counts[i]++
}
}
as.seriesTotal++
}
func (as *aggrStateSum) Finalize(xFilesFactor float64) []float64 {
xff := int(xFilesFactor * float64(as.seriesTotal))
values := make([]float64, as.pointsLen)
sums := as.sums
counts := as.counts
for i, count := range counts {
v := nan
if count > 0 && count >= xff {
v = sums[i]
}
values[i] = v
}
return values
}
type aggrStateMin struct {
pointsLen int
mins []float64
counts []int
seriesTotal int
}
func newAggrStateMin(pointsLen int) aggrState {
return &aggrStateMin{
pointsLen: pointsLen,
mins: make([]float64, pointsLen),
counts: make([]int, pointsLen),
}
}
func (as *aggrStateMin) Update(values []float64) {
if len(values) != as.pointsLen {
panic(fmt.Errorf("BUG: unexpected number of points in values; got %d; want %d", len(values), as.pointsLen))
}
mins := as.mins
counts := as.counts
for i, v := range values {
if math.IsNaN(v) {
continue
}
counts[i]++
if counts[i] == 1 {
mins[i] = v
} else if v < mins[i] {
mins[i] = v
}
}
as.seriesTotal++
}
func (as *aggrStateMin) Finalize(xFilesFactor float64) []float64 {
xff := int(xFilesFactor * float64(as.seriesTotal))
values := make([]float64, as.pointsLen)
mins := as.mins
counts := as.counts
for i, count := range counts {
v := nan
if count > 0 && count >= xff {
v = mins[i]
}
values[i] = v
}
return values
}
type aggrStateMax struct {
pointsLen int
maxs []float64
counts []int
seriesTotal int
}
func newAggrStateMax(pointsLen int) aggrState {
return &aggrStateMax{
pointsLen: pointsLen,
maxs: make([]float64, pointsLen),
counts: make([]int, pointsLen),
}
}
func (as *aggrStateMax) Update(values []float64) {
if len(values) != as.pointsLen {
panic(fmt.Errorf("BUG: unexpected number of points in values; got %d; want %d", len(values), as.pointsLen))
}
maxs := as.maxs
counts := as.counts
for i, v := range values {
if math.IsNaN(v) {
continue
}
counts[i]++
if counts[i] == 1 {
maxs[i] = v
} else if v > maxs[i] {
maxs[i] = v
}
}
as.seriesTotal++
}
func (as *aggrStateMax) Finalize(xFilesFactor float64) []float64 {
xff := int(xFilesFactor * float64(as.seriesTotal))
values := make([]float64, as.pointsLen)
maxs := as.maxs
counts := as.counts
for i, count := range counts {
v := nan
if count > 0 && count >= xff {
v = maxs[i]
}
values[i] = v
}
return values
}
type aggrStateDiff struct {
pointsLen int
vs []float64
counts []int
seriesTotal int
}
func newAggrStateDiff(pointsLen int) aggrState {
return &aggrStateDiff{
pointsLen: pointsLen,
vs: make([]float64, pointsLen),
counts: make([]int, pointsLen),
}
}
func (as *aggrStateDiff) Update(values []float64) {
if len(values) != as.pointsLen {
panic(fmt.Errorf("BUG: unexpected number of points in values; got %d; want %d", len(values), as.pointsLen))
}
vs := as.vs
counts := as.counts
for i, v := range values {
if !math.IsNaN(v) {
if counts[i] == 0 {
vs[i] = v
} else {
vs[i] -= v
}
counts[i]++
}
}
as.seriesTotal++
}
func (as *aggrStateDiff) Finalize(xFilesFactor float64) []float64 {
xff := int(xFilesFactor * float64(as.seriesTotal))
values := make([]float64, as.pointsLen)
vs := as.vs
counts := as.counts
for i, count := range counts {
v := nan
if count > 0 && count >= xff {
v = vs[i]
}
values[i] = v
}
return values
}
type aggrStatePow struct {
pointsLen int
vs []float64
counts []int
seriesTotal int
}
func newAggrStatePow(pointsLen int) aggrState {
return &aggrStatePow{
pointsLen: pointsLen,
vs: make([]float64, pointsLen),
counts: make([]int, pointsLen),
}
}
func (as *aggrStatePow) Update(values []float64) {
if len(values) != as.pointsLen {
panic(fmt.Errorf("BUG: unexpected number of points in values; got %d; want %d", len(values), as.pointsLen))
}
vs := as.vs
counts := as.counts
for i, v := range values {
if !math.IsNaN(v) {
if counts[i] == 0 {
vs[i] = v
} else {
vs[i] = math.Pow(vs[i], v)
}
counts[i]++
}
}
as.seriesTotal++
}
func (as *aggrStatePow) Finalize(xFilesFactor float64) []float64 {
xff := int(xFilesFactor * float64(as.seriesTotal))
values := make([]float64, as.pointsLen)
vs := as.vs
counts := as.counts
for i, count := range counts {
v := nan
if count > 0 && count >= xff {
v = vs[i]
}
values[i] = v
}
return values
}
type aggrStateStddev struct {
pointsLen int
means []float64
m2s []float64
counts []int
seriesTotal int
}
func newAggrStateStddev(pointsLen int) aggrState {
return &aggrStateStddev{
pointsLen: pointsLen,
means: make([]float64, pointsLen),
m2s: make([]float64, pointsLen),
counts: make([]int, pointsLen),
}
}
func (as *aggrStateStddev) Update(values []float64) {
if len(values) != as.pointsLen {
panic(fmt.Errorf("BUG: unexpected number of points in values; got %d; want %d", len(values), as.pointsLen))
}
means := as.means
m2s := as.m2s
counts := as.counts
for i, v := range values {
if math.IsNaN(v) {
continue
}
// See https://en.m.wikipedia.org/wiki/Algorithms_for_calculating_variance#Welford's_online_algorithm
count := counts[i]
mean := means[i]
count++
delta := v - mean
mean += delta / float64(count)
delta2 := v - mean
means[i] = mean
m2s[i] += delta * delta2
counts[i] = count
}
as.seriesTotal++
}
func (as *aggrStateStddev) Finalize(xFilesFactor float64) []float64 {
xff := int(xFilesFactor * float64(as.seriesTotal))
values := make([]float64, as.pointsLen)
m2s := as.m2s
counts := as.counts
for i, count := range counts {
v := nan
if count > 0 && count >= xff {
v = math.Sqrt(m2s[i] / float64(count))
}
values[i] = v
}
return values
}
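// Note: Finalize returns the population standard deviation sqrt(m2/count),
// where m2 is accumulated via Welford's online algorithm in Update.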
type aggrStateCount struct {
pointsLen int
counts []int
seriesTotal int
}
func newAggrStateCount(pointsLen int) aggrState {
return &aggrStateCount{
pointsLen: pointsLen,
counts: make([]int, pointsLen),
}
}
func (as *aggrStateCount) Update(values []float64) {
if len(values) != as.pointsLen {
panic(fmt.Errorf("BUG: unexpected number of points in values; got %d; want %d", len(values), as.pointsLen))
}
counts := as.counts
for i, v := range values {
if !math.IsNaN(v) {
counts[i]++
}
}
as.seriesTotal++
}
func (as *aggrStateCount) Finalize(xFilesFactor float64) []float64 {
xff := int(xFilesFactor * float64(as.seriesTotal))
values := make([]float64, as.pointsLen)
counts := as.counts
for i, count := range counts {
v := nan
if count > 0 && count >= xff {
v = float64(count)
}
values[i] = v
}
return values
}
type aggrStateRange struct {
pointsLen int
mins []float64
maxs []float64
counts []int
seriesTotal int
}
func newAggrStateRange(pointsLen int) aggrState {
return &aggrStateRange{
pointsLen: pointsLen,
mins: make([]float64, pointsLen),
maxs: make([]float64, pointsLen),
counts: make([]int, pointsLen),
}
}
func (as *aggrStateRange) Update(values []float64) {
if len(values) != as.pointsLen {
panic(fmt.Errorf("BUG: unexpected number of points in values; got %d; want %d", len(values), as.pointsLen))
}
mins := as.mins
maxs := as.maxs
counts := as.counts
for i, v := range values {
if math.IsNaN(v) {
continue
}
counts[i]++
if counts[i] == 1 {
mins[i] = v
maxs[i] = v
} else if v < mins[i] {
mins[i] = v
} else if v > maxs[i] {
maxs[i] = v
}
}
as.seriesTotal++
}
func (as *aggrStateRange) Finalize(xFilesFactor float64) []float64 {
xff := int(xFilesFactor * float64(as.seriesTotal))
values := make([]float64, as.pointsLen)
mins := as.mins
maxs := as.maxs
counts := as.counts
for i, count := range counts {
v := nan
if count > 0 && count >= xff {
v = maxs[i] - mins[i]
}
values[i] = v
}
return values
}
type aggrStateMultiply struct {
pointsLen int
ms []float64
counts []int
seriesTotal int
}
func newAggrStateMultiply(pointsLen int) aggrState {
return &aggrStateMultiply{
pointsLen: pointsLen,
ms: make([]float64, pointsLen),
counts: make([]int, pointsLen),
}
}
func (as *aggrStateMultiply) Update(values []float64) {
if len(values) != as.pointsLen {
panic(fmt.Errorf("BUG: unexpected number of points in values; got %d; want %d", len(values), as.pointsLen))
}
ms := as.ms
counts := as.counts
for i, v := range values {
if math.IsNaN(v) {
continue
}
counts[i]++
if counts[i] == 1 {
ms[i] = v
} else {
ms[i] *= v
}
}
as.seriesTotal++
}
func (as *aggrStateMultiply) Finalize(xFilesFactor float64) []float64 {
xff := int(xFilesFactor * float64(as.seriesTotal))
values := make([]float64, as.pointsLen)
ms := as.ms
counts := as.counts
for i, count := range counts {
v := nan
if count > 0 && count >= xff {
v = ms[i]
}
values[i] = v
}
return values
}
type aggrStateFirst struct {
pointsLen int
vs []float64
counts []int
seriesTotal int
}
func newAggrStateFirst(pointsLen int) aggrState {
return &aggrStateFirst{
pointsLen: pointsLen,
vs: make([]float64, pointsLen),
counts: make([]int, pointsLen),
}
}
func (as *aggrStateFirst) Update(values []float64) {
if len(values) != as.pointsLen {
panic(fmt.Errorf("BUG: unexpected number of points in values; got %d; want %d", len(values), as.pointsLen))
}
vs := as.vs
counts := as.counts
for i, v := range values {
if math.IsNaN(v) {
continue
}
counts[i]++
if counts[i] == 1 {
vs[i] = v
}
}
as.seriesTotal++
}
func (as *aggrStateFirst) Finalize(xFilesFactor float64) []float64 {
xff := int(xFilesFactor * float64(as.seriesTotal))
values := make([]float64, as.pointsLen)
vs := as.vs
counts := as.counts
for i, count := range counts {
v := nan
if count > 0 && count >= xff {
v = vs[i]
}
values[i] = v
}
return values
}
type aggrStateLast struct {
pointsLen int
vs []float64
counts []int
seriesTotal int
}
func newAggrStateLast(pointsLen int) aggrState {
return &aggrStateLast{
pointsLen: pointsLen,
vs: make([]float64, pointsLen),
counts: make([]int, pointsLen),
}
}
func (as *aggrStateLast) Update(values []float64) {
if len(values) != as.pointsLen {
panic(fmt.Errorf("BUG: unexpected number of points in values; got %d; want %d", len(values), as.pointsLen))
}
vs := as.vs
counts := as.counts
for i, v := range values {
if math.IsNaN(v) {
continue
}
vs[i] = v
counts[i]++
}
as.seriesTotal++
}
func (as *aggrStateLast) Finalize(xFilesFactor float64) []float64 {
xff := int(xFilesFactor * float64(as.seriesTotal))
values := make([]float64, as.pointsLen)
vs := as.vs
counts := as.counts
for i, count := range counts {
v := nan
if count > 0 && count >= xff {
v = vs[i]
}
values[i] = v
}
return values
}

View File

@@ -0,0 +1,210 @@
package graphite
import (
"flag"
"fmt"
"time"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmselect/graphiteql"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmselect/netstorage"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmselect/searchutils"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/cgroup"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/storage"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/timerpool"
)
var maxGraphiteSeries = flag.Int("search.maxGraphiteSeries", 300e3, "The maximum number of time series, which can be scanned during queries to Graphite Render API. "+
"See https://docs.victoriametrics.com/#graphite-render-api-usage")
type evalConfig struct {
startTime int64
endTime int64
storageStep int64
deadline searchutils.Deadline
currentTime time.Time
// xFilesFactor is used for determining when consolidateFunc must be applied.
//
// 0 means that consolidateFunc should be applied if at least a single non-NaN data point exists on the given step.
// 1 means that consolidateFunc should be applied if all the data points are non-NaN on the given step.
xFilesFactor float64
// Enforced tag filters
etfs [][]storage.TagFilter
// originalQuery contains the original query - used for debug logging.
originalQuery string
}
func (ec *evalConfig) pointsLen(step int64) int {
return int((ec.endTime - ec.startTime) / step)
}
func (ec *evalConfig) newTimestamps(step int64) []int64 {
pointsLen := ec.pointsLen(step)
timestamps := make([]int64, pointsLen)
ts := ec.startTime
for i := 0; i < pointsLen; i++ {
timestamps[i] = ts
ts += step
}
return timestamps
}
type series struct {
Name string
Tags map[string]string
Timestamps []int64
Values []float64
// pathExpression holds the current path expression in the same way Graphite does.
pathExpression string
expr graphiteql.Expr
// consolidateFunc is applied to raw samples in order to generate data points aligned to the given step.
// See the series.consolidate() function for details.
consolidateFunc aggrFunc
// xFilesFactor is used for determining when consolidateFunc must be applied.
//
// 0 means that consolidateFunc should be applied if at least a single non-NaN data point exists on the given step.
// 1 means that consolidateFunc should be applied if all the data points are non-NaN on the given step.
xFilesFactor float64
step int64
}
func (s *series) consolidate(ec *evalConfig, step int64) {
aggrFunc := s.consolidateFunc
if aggrFunc == nil {
aggrFunc = aggrAvg
}
xFilesFactor := s.xFilesFactor
if s.xFilesFactor <= 0 {
xFilesFactor = ec.xFilesFactor
}
s.summarize(aggrFunc, ec.startTime, ec.endTime, step, xFilesFactor)
}
func (s *series) summarize(aggrFunc aggrFunc, startTime, endTime, step int64, xFilesFactor float64) {
pointsLen := int((endTime - startTime) / step)
timestamps := s.Timestamps
values := s.Values
dstTimestamps := make([]int64, 0, pointsLen)
dstValues := make([]float64, 0, pointsLen)
ts := startTime
i := 0
for len(dstTimestamps) < pointsLen {
tsEnd := ts + step
j := i
for j < len(timestamps) && timestamps[j] < tsEnd {
j++
}
if i == j && i > 0 && ts-timestamps[i-1] <= 2000 {
// The current [ts ... tsEnd) interval has no samples,
// but the last sample on the previous interval [ts - step ... ts)
// is closer than 2 seconds to the current interval.
// Let's consider that this sample belongs to the current interval,
// since such discrepancy could appear because of small jitter in samples' ingestion.
i--
}
v := aggrFunc.apply(xFilesFactor, values[i:j])
dstTimestamps = append(dstTimestamps, ts)
dstValues = append(dstValues, v)
ts = tsEnd
i = j
}
// Do not reuse s.Timestamps and s.Values, since they can be too big
s.Timestamps = dstTimestamps
s.Values = dstValues
s.step = step
}
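// Consolidation sketch with hypothetical samples: raw points at a 10s resolution
// are summarized into 30s buckets with the average function:
//
//	s := &series{
//		Timestamps: []int64{0, 10e3, 20e3, 30e3, 40e3, 50e3},
//		Values:     []float64{1, 2, 3, 4, 5, 6},
//	}
//	s.summarize(aggrAvg, 0, 60e3, 30e3, 0)
//	// s.Timestamps: [0, 30000], s.Values: [2, 5]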
func execExpr(ec *evalConfig, query string) (nextSeriesFunc, error) {
expr, err := graphiteql.Parse(query)
if err != nil {
return nil, fmt.Errorf("cannot parse %q: %w", query, err)
}
return evalExpr(ec, expr)
}
func evalExpr(ec *evalConfig, expr graphiteql.Expr) (nextSeriesFunc, error) {
switch t := expr.(type) {
case *graphiteql.MetricExpr:
return evalMetricExpr(ec, t)
case *graphiteql.FuncExpr:
return evalFuncExpr(ec, t)
default:
return nil, fmt.Errorf("unexpected expression type %T; want graphiteql.MetricExpr or graphiteql.FuncExpr; expr: %q", t, t.AppendString(nil))
}
}
func evalMetricExpr(ec *evalConfig, me *graphiteql.MetricExpr) (nextSeriesFunc, error) {
tfs := []storage.TagFilter{{
Key: []byte("__graphite__"),
Value: []byte(me.Query),
}}
tfss := joinTagFilterss(tfs, ec.etfs)
sq := storage.NewSearchQuery(ec.startTime, ec.endTime, tfss, *maxGraphiteSeries)
return newNextSeriesForSearchQuery(ec, sq, me)
}
func newNextSeriesForSearchQuery(ec *evalConfig, sq *storage.SearchQuery, expr graphiteql.Expr) (nextSeriesFunc, error) {
rss, err := netstorage.ProcessSearchQuery(nil, sq, ec.deadline)
if err != nil {
return nil, fmt.Errorf("cannot fetch data for %q: %w", sq, err)
}
seriesCh := make(chan *series, cgroup.AvailableCPUs())
errCh := make(chan error, 1)
go func() {
err := rss.RunParallel(nil, func(rs *netstorage.Result, workerID uint) error {
nameWithTags := getCanonicalPath(&rs.MetricName)
tags := unmarshalTags(nameWithTags)
s := &series{
Name: tags["name"],
Tags: tags,
Timestamps: append([]int64{}, rs.Timestamps...),
Values: append([]float64{}, rs.Values...),
expr: expr,
pathExpression: string(expr.AppendString(nil)),
}
s.summarize(aggrAvg, ec.startTime, ec.endTime, ec.storageStep, 0)
t := timerpool.Get(30 * time.Second)
select {
case seriesCh <- s:
case <-t.C:
logger.Errorf("resource leak when processing the %s (full query: %s); please report this error to VictoriaMetrics developers",
expr.AppendString(nil), ec.originalQuery)
}
timerpool.Put(t)
return nil
})
close(seriesCh)
errCh <- err
}()
f := func() (*series, error) {
s := <-seriesCh
if s != nil {
return s, nil
}
err := <-errCh
return nil, err
}
return f, nil
}
func evalFuncExpr(ec *evalConfig, fe *graphiteql.FuncExpr) (nextSeriesFunc, error) {
// Do not lowercase the fe.FuncName, since Graphite function names are case-sensitive.
tf := transformFuncs[fe.FuncName]
if tf == nil {
return nil, fmt.Errorf("unknown function %q", fe.FuncName)
}
nextSeries, err := tf(ec, fe)
if err != nil {
return nil, fmt.Errorf("cannot evaluate %s: %w", fe.AppendString(nil), err)
}
return nextSeries, nil
}
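// Consumption sketch: callers pull series from a nextSeriesFunc until it
// returns (nil, nil), or use helpers such as drainAllSeries and fetchAllSeries:
//
//	for {
//		s, err := nextSeries()
//		if err != nil {
//			return err
//		}
//		if s == nil {
//			break // all series have been consumed
//		}
//		// process s
//	}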

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@@ -0,0 +1,88 @@
package graphite
import (
// embed functions.json file
_ "embed"
"encoding/json"
"fmt"
"net/http"
"time"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmselect/searchutils"
)
// FunctionsHandler implements /functions handler.
//
// See https://graphite.readthedocs.io/en/latest/functions.html#function-api
func FunctionsHandler(startTime time.Time, w http.ResponseWriter, r *http.Request) error {
grouped := searchutils.GetBool(r, "grouped")
group := r.FormValue("group")
result := make(map[string]interface{})
for funcName, fi := range funcs {
if group != "" && fi.Group != group {
continue
}
if grouped {
v := result[fi.Group]
if v == nil {
v = make(map[string]*funcInfo)
result[fi.Group] = v
}
m := v.(map[string]*funcInfo)
m[funcName] = fi
} else {
result[funcName] = fi
}
}
return writeJSON(result, w, r)
}
// FunctionDetailsHandler implements /functions/<func_name> handler.
//
// See https://graphite.readthedocs.io/en/latest/functions.html#function-api
func FunctionDetailsHandler(startTime time.Time, funcName string, w http.ResponseWriter, r *http.Request) error {
result := funcs[funcName]
if result == nil {
return fmt.Errorf("cannot find function %q", funcName)
}
return writeJSON(result, w, r)
}
func writeJSON(result interface{}, w http.ResponseWriter, r *http.Request) error {
data, err := json.Marshal(result)
if err != nil {
return fmt.Errorf("cannot marshal response to JSON: %w", err)
}
jsonp := r.FormValue("jsonp")
contentType := getContentType(jsonp)
w.Header().Set("Content-Type", contentType)
if jsonp != "" {
fmt.Fprintf(w, "%s(", jsonp)
}
w.Write(data)
if jsonp != "" {
fmt.Fprintf(w, ")")
}
return nil
}
//go:embed functions.json
var funcsJSON []byte
type funcInfo struct {
Name string `json:"name"`
Function string `json:"function"`
Description string `json:"description"`
Module string `json:"module"`
Group string `json:"group"`
Params json.RawMessage `json:"params"`
}
var funcs = func() map[string]*funcInfo {
var m map[string]*funcInfo
if err := json.Unmarshal(funcsJSON, &m); err != nil {
// Do not use logger.Panicf, since it isn't ready yet.
panic(fmt.Errorf("cannot parse funcsJSON: %s", err))
}
return m
}()
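// Quick check sketch (the default single-node port 8428 is an assumption):
//
//	curl http://localhost:8428/functions?grouped=1
//	curl http://localhost:8428/functions/sumSeries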

View File

@@ -0,0 +1,48 @@
package graphite
import (
"strconv"
)
func naturalLess(a, b string) bool {
for {
var aPrefix, bPrefix string
aPrefix, a = getNonNumPrefix(a)
bPrefix, b = getNonNumPrefix(b)
if aPrefix != bPrefix {
return aPrefix < bPrefix
}
if len(a) == 0 || len(b) == 0 {
return a < b
}
var aNum, bNum int
aNum, a = getNumPrefix(a)
bNum, b = getNumPrefix(b)
if aNum != bNum {
return aNum < bNum
}
}
}
func getNonNumPrefix(s string) (prefix string, tail string) {
for i := 0; i < len(s); i++ {
ch := s[i]
if ch >= '0' && ch <= '9' {
return s[:i], s[i:]
}
}
return s, ""
}
func getNumPrefix(s string) (prefix int, tail string) {
i := 0
for i < len(s) {
ch := s[i]
if ch < '0' || ch > '9' {
break
}
i++
}
prefix, _ = strconv.Atoi(s[:i])
return prefix, s[i:]
}
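// Sorting sketch: naturalLess orders numeric runs by value rather than
// lexicographically, so it can be used with sort.Slice:
//
//	names := []string{"foo10", "foo2", "foo1"}
//	sort.Slice(names, func(i, j int) bool { return naturalLess(names[i], names[j]) })
//	// names: ["foo1", "foo2", "foo10"]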

View File

@@ -0,0 +1,29 @@
package graphite
import (
"testing"
)
func TestNaturalLess(t *testing.T) {
f := func(a, b string, okExpected bool) {
t.Helper()
ok := naturalLess(a, b)
if ok != okExpected {
t.Fatalf("unexpected result for naturalLess(%q, %q); got %v; want %v", a, b, ok, okExpected)
}
}
f("", "", false)
f("a", "b", true)
f("", "foo", true)
f("foo", "", false)
f("foo", "foo", false)
f("b", "a", false)
f("1", "2", true)
f("10", "2", false)
f("foo100", "foo12", false)
f("foo12", "foo100", true)
f("10foo2", "10foo10", true)
f("10foo10", "10foo2", false)
f("foo1bar10", "foo1bar2aa", false)
f("foo1bar2aa", "foo1bar10aa", true)
}

View File

@@ -0,0 +1,273 @@
package graphite
import (
"flag"
"fmt"
"net/http"
"strconv"
"strings"
"time"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmselect/bufferedwriter"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmselect/searchutils"
"github.com/VictoriaMetrics/metrics"
)
var (
storageStep = flag.Duration("search.graphiteStorageStep", 10*time.Second, "The interval between datapoints stored in the database. "+
"It is used by the Graphite Render API handler for normalizing the interval between datapoints in case it isn't normalized. "+
"It can be overridden by sending the 'storage_step' query arg to the /render API or "+
"by sending the desired interval via the 'Storage-Step' http header when querying the /render API")
maxPointsPerSeries = flag.Int("search.graphiteMaxPointsPerSeries", 1e6, "The maximum number of points per series the Graphite Render API can return")
)
// RenderHandler implements /render endpoint from Graphite Render API.
//
// See https://graphite.readthedocs.io/en/stable/render_api.html
func RenderHandler(startTime time.Time, w http.ResponseWriter, r *http.Request) error {
deadline := searchutils.GetDeadlineForQuery(r, startTime)
format := r.FormValue("format")
if format != "json" {
return fmt.Errorf("unsupported format=%q; supported values: json", format)
}
xFilesFactor := float64(0)
if xff := r.FormValue("xFilesFactor"); len(xff) > 0 {
f, err := strconv.ParseFloat(xff, 64)
if err != nil {
return fmt.Errorf("cannot parse xFilesFactor=%q: %w", xff, err)
}
xFilesFactor = f
}
from := r.FormValue("from")
fromTime := startTime.UnixNano()/1e6 - 24*3600*1000
if len(from) != 0 {
fv, err := parseTime(startTime, from)
if err != nil {
return fmt.Errorf("cannot parse from=%q: %w", from, err)
}
fromTime = fv
}
until := r.FormValue("until")
untilTime := startTime.UnixNano() / 1e6
if len(until) != 0 {
uv, err := parseTime(startTime, until)
if err != nil {
return fmt.Errorf("cannot parse until=%q: %w", until, err)
}
untilTime = uv
}
storageStep, err := getStorageStep(r)
if err != nil {
return err
}
fromAlign := fromTime % storageStep
fromTime -= fromAlign
if fromAlign > 0 {
fromTime += storageStep
}
untilAlign := untilTime % storageStep
untilTime -= untilAlign
if untilAlign > 0 {
untilTime += storageStep
}
if untilTime < fromTime {
return fmt.Errorf("from=%s cannot exceed until=%s", from, until)
}
pointsPerSeries := (untilTime - fromTime) / storageStep
if pointsPerSeries > int64(*maxPointsPerSeries) {
return fmt.Errorf("too many points per series must be returned on the given [from=%s ... until=%s] time range and the given storageStep=%d: %d; "+
"either reduce the time range or increase -search.graphiteMaxPointsPerSeries=%d", from, until, storageStep, pointsPerSeries, *maxPointsPerSeries)
}
maxDataPoints := 0
if s := r.FormValue("maxDataPoints"); len(s) > 0 {
n, err := strconv.ParseFloat(s, 64)
if err != nil {
return fmt.Errorf("cannot parse maxDataPoints=%q: %w", maxDataPoints, err)
}
if n <= 0 {
return fmt.Errorf("maxDataPoints must be greater than 0; got %f", n)
}
maxDataPoints = int(n)
}
etfs, err := searchutils.GetExtraTagFilters(r)
if err != nil {
return fmt.Errorf("cannot setup tag filters: %w", err)
}
var nextSeriess []nextSeriesFunc
targets := r.Form["target"]
for _, target := range targets {
ec := &evalConfig{
startTime: fromTime,
endTime: untilTime,
storageStep: storageStep,
deadline: deadline,
currentTime: startTime,
xFilesFactor: xFilesFactor,
etfs: etfs,
originalQuery: target,
}
nextSeries, err := execExpr(ec, target)
if err != nil {
for _, f := range nextSeriess {
_, _ = drainAllSeries(f)
}
return fmt.Errorf("cannot eval target=%q: %w", target, err)
}
// do not use nextSeriesConcurrentWrapper here in order to preserve series order.
if maxDataPoints > 0 {
step := (ec.endTime - ec.startTime) / int64(maxDataPoints)
nextSeries = nextSeriesSerialWrapper(nextSeries, func(s *series) (*series, error) {
aggrFunc := s.consolidateFunc
if aggrFunc == nil {
aggrFunc = aggrAvg
}
xFilesFactor := s.xFilesFactor
if s.xFilesFactor <= 0 {
xFilesFactor = ec.xFilesFactor
}
if len(s.Values) > maxDataPoints {
s.summarize(aggrFunc, ec.startTime, ec.endTime, step, xFilesFactor)
}
return s, nil
})
}
nextSeriess = append(nextSeriess, nextSeries)
}
f := nextSeriesGroup(nextSeriess, nil)
jsonp := r.FormValue("jsonp")
contentType := getContentType(jsonp)
w.Header().Set("Content-Type", contentType)
bw := bufferedwriter.Get(w)
defer bufferedwriter.Put(bw)
WriteRenderJSONResponse(bw, f, jsonp)
if err := bw.Flush(); err != nil {
return err
}
renderDuration.UpdateDuration(startTime)
return nil
}
var renderDuration = metrics.NewSummary(`vm_request_duration_seconds{path="/render"}`)
const msecsPerDay = 24 * 3600 * 1000
// parseTime parses Graphite time in s.
//
// If the time in s is relative, then it is relative to startTime.
func parseTime(startTime time.Time, s string) (int64, error) {
switch s {
case "now":
return startTime.UnixNano() / 1e6, nil
case "today":
ts := startTime.UnixNano() / 1e6
return ts - ts%msecsPerDay, nil
case "yesterday":
ts := startTime.UnixNano() / 1e6
return ts - (ts % msecsPerDay) - msecsPerDay, nil
}
// Attempt to parse RFC3339 (e.g. YYYY-MM-DDTHH:mm:SSZ or with a numeric timezone offset)
if t, err := time.Parse(time.RFC3339, s); err == nil {
return t.UnixNano() / 1e6, nil
}
// Attempt to parse HH:MM_YYYYMMDD
if t, err := time.Parse("15:04_20060102", s); err == nil {
return t.UnixNano() / 1e6, nil
}
// Attempt to parse HH:MMYYYYMMDD
if t, err := time.Parse("15:0420060102", s); err == nil {
return t.UnixNano() / 1e6, nil
}
// Attempt to parse YYYYMMDD
if t, err := time.Parse("20060102", s); err == nil {
return t.UnixNano() / 1e6, nil
}
// Attempt to parse HH:MM YYYYMMDD
if t, err := time.Parse("15:04 20060102", s); err == nil {
return t.UnixNano() / 1e6, nil
}
// Attempt to parse YYYY-MM-DD
if t, err := time.Parse("2006-01-02", s); err == nil {
return t.UnixNano() / 1e6, nil
}
// Attempt to parse MM/DD/YY
if t, err := time.Parse("01/02/06", s); err == nil {
return t.UnixNano() / 1e6, nil
}
// Attempt to parse time as unix timestamp
if n, err := strconv.ParseInt(s, 10, 64); err == nil {
return n * 1000, nil
}
// Attempt to parse interval
if interval, err := parseInterval(s); err == nil {
return startTime.UnixNano()/1e6 + interval, nil
}
return 0, fmt.Errorf("unsupported time %q", s)
}
func parseInterval(s string) (int64, error) {
s = strings.TrimSpace(s)
prefix := s
var suffix string
for i := 0; i < len(s); i++ {
ch := s[i]
if ch != '-' && ch != '+' && ch != '.' && (ch < '0' || ch > '9') {
prefix = s[:i]
suffix = s[i:]
break
}
}
n, err := strconv.ParseFloat(prefix, 64)
if err != nil {
return 0, fmt.Errorf("cannot parse interval %q: %w", s, err)
}
suffix = strings.TrimSpace(suffix)
if len(suffix) == 0 {
return 0, fmt.Errorf("missing suffix for interval %q; expecting s, min, h, d, w, mon or y suffix", s)
}
var m float64
switch {
case strings.HasPrefix(suffix, "ms"):
m = 1
case strings.HasPrefix(suffix, "s"):
m = 1000
case strings.HasPrefix(suffix, "mi"),
strings.HasPrefix(suffix, "m") && !strings.HasPrefix(suffix, "mo"):
m = 60 * 1000
case strings.HasPrefix(suffix, "h"):
m = 3600 * 1000
case strings.HasPrefix(suffix, "d"):
m = 24 * 3600 * 1000
case strings.HasPrefix(suffix, "w"):
m = 7 * 24 * 3600 * 1000
case strings.HasPrefix(suffix, "mo"):
m = 30 * 24 * 3600 * 1000
case strings.HasPrefix(suffix, "y"):
m = 365 * 24 * 3600 * 1000
default:
return 0, fmt.Errorf("unsupported interval %q", s)
}
return int64(n * m), nil
}
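// For example: parseInterval("10min") returns 600000 (ten minutes in milliseconds),
// while parseInterval("-7.85s") returns -7850.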
func getStorageStep(r *http.Request) (int64, error) {
s := r.FormValue("storage_step")
if len(s) == 0 {
s = r.Header.Get("Storage-Step")
}
if len(s) == 0 {
step := int64(storageStep.Seconds() * 1000)
if step <= 0 {
return 0, fmt.Errorf("the `-search.graphiteStorageStep` command-line flag value must be positive; got %s", storageStep.String())
}
return step, nil
}
step, err := parseInterval(s)
if err != nil {
return 0, fmt.Errorf("cannot parse datapoints interval %s: %w", s, err)
}
if step <= 0 {
return 0, fmt.Errorf("storage_step cannot be negative; got %s", s)
}
return step, nil
}
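// Note on precedence: the 'storage_step' query arg wins, then the 'Storage-Step'
// request header, then the -search.graphiteStorageStep command-line flag value.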

View File

@@ -0,0 +1,103 @@
package graphite
import (
"testing"
"time"
)
func TestParseIntervalSuccess(t *testing.T) {
f := func(s string, intervalExpected int64) {
t.Helper()
interval, err := parseInterval(s)
if err != nil {
t.Fatalf("unexpected error in parseInterva(%q): %s", s, err)
}
if interval != intervalExpected {
t.Fatalf("unexpected result for parseInterval(%q); got %d; want %d", s, interval, intervalExpected)
}
}
f(`1ms`, 1)
f(`-10.5ms`, -10)
f(`+5.5s`, 5500)
f(`7.85s`, 7850)
f(`-7.85sec`, -7850)
f(`-7.85secs`, -7850)
f(`5seconds`, 5000)
f(`10min`, 10*60*1000)
f(`10 mins`, 10*60*1000)
f(` 10 mins `, 10*60*1000)
f(`10m`, 10*60*1000)
f(`-10.5min`, -10.5*60*1000)
f(`-10.5m`, -10.5*60*1000)
f(`3minutes`, 3*60*1000)
f(`3h`, 3*3600*1000)
f(`-4.5hour`, -4.5*3600*1000)
f(`7hours`, 7*3600*1000)
f(`5d`, 5*24*3600*1000)
f(`-3.5days`, -3.5*24*3600*1000)
f(`0.5w`, 0.5*7*24*3600*1000)
f(`10weeks`, 10*7*24*3600*1000)
f(`2months`, 2*30*24*3600*1000)
f(`2mo`, 2*30*24*3600*1000)
f(`1.2y`, 1.2*365*24*3600*1000)
f(`-3years`, -3*365*24*3600*1000)
}
func TestParseIntervalError(t *testing.T) {
f := func(s string) {
t.Helper()
interval, err := parseInterval(s)
if err == nil {
t.Fatalf("expecting non-nil error for parseInterval(%q)", s)
}
if interval != 0 {
t.Fatalf("unexpected non-zero interval for parseInterval(%q): %d", s, interval)
}
}
f("")
f("foo")
f(`'1minute'`)
f(`123`)
}
func TestParseTimeSuccess(t *testing.T) {
startTime := time.Now()
startTimestamp := startTime.UnixNano() / 1e6
f := func(s string, timestampExpected int64) {
t.Helper()
timestamp, err := parseTime(startTime, s)
if err != nil {
t.Fatalf("unexpected error from parseTime(%q): %s", s, err)
}
if timestamp != timestampExpected {
t.Fatalf("unexpected timestamp returned from parseTime(%q); got %d; want %d", s, timestamp, timestampExpected)
}
}
f("now", startTimestamp)
f("today", startTimestamp-startTimestamp%msecsPerDay)
f("yesterday", startTimestamp-(startTimestamp%msecsPerDay)-msecsPerDay)
f("1234567890", 1234567890000)
f("18:36_20210223", 1614105360000)
f("20210223", 1614038400000)
f("02/23/21", 1614038400000)
f("2021-02-23", 1614038400000)
f("2021-02-23T18:36:12Z", 1614105372000)
f("-3hours", startTimestamp-3*3600*1000)
f("1.5minutes", startTimestamp+1.5*60*1000)
}
func TestParseTimeFailure(t *testing.T) {
f := func(s string) {
t.Helper()
timestamp, err := parseTime(time.Now(), s)
if err == nil {
t.Fatalf("expecting non-nil error for parseTime(%q)", s)
}
if timestamp != 0 {
t.Fatalf("expecting zero value for parseTime(%q); got %d", s, timestamp)
}
}
f("")
f("foobar")
f("1235aafb")
}

View File

@@ -0,0 +1,59 @@
{% stripspace %}
{% import (
"math"
"sort"
) %}
RenderJSONResponse generates response for /render?format=json .
See https://graphite.readthedocs.io/en/stable/render_api.html#json
{% func RenderJSONResponse(nextSeries nextSeriesFunc, jsonp string) %}
{% if jsonp != "" %}{%s= jsonp %}({% endif %}
{% code ss, err := fetchAllSeries(nextSeries) %}
{% if err != nil %}
{
"error": {%q= err.Error() %}
}
{% return %}
{% endif %}
{% code sort.Slice(ss, func(i, j int) bool { return ss[i].Name < ss[j].Name }) %}
[
{% for i, s := range ss %}
{%= renderSeriesJSON(s) %}
{% if i+1 < len(ss) %},{% endif %}
{% endfor %}
]
{% if jsonp != "" %}){% endif %}
{% endfunc %}
{% func renderSeriesJSON(s *series) %}
{
"target": {%q= s.Name %},
"tags":{
{% code
tagKeys := make([]string, 0, len(s.Tags))
for k := range s.Tags {
tagKeys = append(tagKeys, k)
}
sort.Strings(tagKeys)
%}
{% for i, k := range tagKeys %}
{% code v := s.Tags[k] %}
{%q= k %}: {%q= v %}
{% if i+1 < len(tagKeys) %},{% endif %}
{% endfor %}
},
"datapoints":[
{% code timestamps := s.Timestamps %}
{% for i, v := range s.Values %}
[
{% if math.IsNaN(v) %}null{% else %}{%f= v %}{% endif %},
{%dl= timestamps[i]/1e3 %}
]
{% if i+1 < len(timestamps) %},{% endif %}
{% endfor %}
]
}
{% endfunc %}
{% endstripspace %}
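A response rendered by this template looks as follows (the series name and values are illustrative):

[{"target":"foo.bar","tags":{"name":"foo.bar"},"datapoints":[[1,1614105360],[null,1614105370]]}]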

View File

@@ -0,0 +1,203 @@
// Code generated by qtc from "render_response.qtpl". DO NOT EDIT.
// See https://github.com/valyala/quicktemplate for details.
//line app/vmselect/graphite/render_response.qtpl:3
package graphite
//line app/vmselect/graphite/render_response.qtpl:3
import (
"math"
"sort"
)
// RenderJSONResponse generates response for /render?format=json . See https://graphite.readthedocs.io/en/stable/render_api.html#json
//line app/vmselect/graphite/render_response.qtpl:10
import (
qtio422016 "io"
qt422016 "github.com/valyala/quicktemplate"
)
//line app/vmselect/graphite/render_response.qtpl:10
var (
_ = qtio422016.Copy
_ = qt422016.AcquireByteBuffer
)
//line app/vmselect/graphite/render_response.qtpl:10
func StreamRenderJSONResponse(qw422016 *qt422016.Writer, nextSeries nextSeriesFunc, jsonp string) {
//line app/vmselect/graphite/render_response.qtpl:11
if jsonp != "" {
//line app/vmselect/graphite/render_response.qtpl:11
qw422016.N().S(jsonp)
//line app/vmselect/graphite/render_response.qtpl:11
qw422016.N().S(`(`)
//line app/vmselect/graphite/render_response.qtpl:11
}
//line app/vmselect/graphite/render_response.qtpl:12
ss, err := fetchAllSeries(nextSeries)
//line app/vmselect/graphite/render_response.qtpl:13
if err != nil {
//line app/vmselect/graphite/render_response.qtpl:13
qw422016.N().S(`{"error":`)
//line app/vmselect/graphite/render_response.qtpl:15
qw422016.N().Q(err.Error())
//line app/vmselect/graphite/render_response.qtpl:15
qw422016.N().S(`}`)
//line app/vmselect/graphite/render_response.qtpl:17
return
//line app/vmselect/graphite/render_response.qtpl:18
}
//line app/vmselect/graphite/render_response.qtpl:19
sort.Slice(ss, func(i, j int) bool { return ss[i].Name < ss[j].Name })
//line app/vmselect/graphite/render_response.qtpl:19
qw422016.N().S(`[`)
//line app/vmselect/graphite/render_response.qtpl:21
for i, s := range ss {
//line app/vmselect/graphite/render_response.qtpl:22
streamrenderSeriesJSON(qw422016, s)
//line app/vmselect/graphite/render_response.qtpl:23
if i+1 < len(ss) {
//line app/vmselect/graphite/render_response.qtpl:23
qw422016.N().S(`,`)
//line app/vmselect/graphite/render_response.qtpl:23
}
//line app/vmselect/graphite/render_response.qtpl:24
}
//line app/vmselect/graphite/render_response.qtpl:24
qw422016.N().S(`]`)
//line app/vmselect/graphite/render_response.qtpl:26
if jsonp != "" {
//line app/vmselect/graphite/render_response.qtpl:26
qw422016.N().S(`)`)
//line app/vmselect/graphite/render_response.qtpl:26
}
//line app/vmselect/graphite/render_response.qtpl:27
}
//line app/vmselect/graphite/render_response.qtpl:27
func WriteRenderJSONResponse(qq422016 qtio422016.Writer, nextSeries nextSeriesFunc, jsonp string) {
//line app/vmselect/graphite/render_response.qtpl:27
qw422016 := qt422016.AcquireWriter(qq422016)
//line app/vmselect/graphite/render_response.qtpl:27
StreamRenderJSONResponse(qw422016, nextSeries, jsonp)
//line app/vmselect/graphite/render_response.qtpl:27
qt422016.ReleaseWriter(qw422016)
//line app/vmselect/graphite/render_response.qtpl:27
}
//line app/vmselect/graphite/render_response.qtpl:27
func RenderJSONResponse(nextSeries nextSeriesFunc, jsonp string) string {
//line app/vmselect/graphite/render_response.qtpl:27
qb422016 := qt422016.AcquireByteBuffer()
//line app/vmselect/graphite/render_response.qtpl:27
WriteRenderJSONResponse(qb422016, nextSeries, jsonp)
//line app/vmselect/graphite/render_response.qtpl:27
qs422016 := string(qb422016.B)
//line app/vmselect/graphite/render_response.qtpl:27
qt422016.ReleaseByteBuffer(qb422016)
//line app/vmselect/graphite/render_response.qtpl:27
return qs422016
//line app/vmselect/graphite/render_response.qtpl:27
}
//line app/vmselect/graphite/render_response.qtpl:29
func streamrenderSeriesJSON(qw422016 *qt422016.Writer, s *series) {
//line app/vmselect/graphite/render_response.qtpl:29
qw422016.N().S(`{"target":`)
//line app/vmselect/graphite/render_response.qtpl:31
qw422016.N().Q(s.Name)
//line app/vmselect/graphite/render_response.qtpl:31
qw422016.N().S(`,"tags":{`)
//line app/vmselect/graphite/render_response.qtpl:34
tagKeys := make([]string, 0, len(s.Tags))
for k := range s.Tags {
tagKeys = append(tagKeys, k)
}
sort.Strings(tagKeys)
//line app/vmselect/graphite/render_response.qtpl:40
for i, k := range tagKeys {
//line app/vmselect/graphite/render_response.qtpl:41
v := s.Tags[k]
//line app/vmselect/graphite/render_response.qtpl:42
qw422016.N().Q(k)
//line app/vmselect/graphite/render_response.qtpl:42
qw422016.N().S(`:`)
//line app/vmselect/graphite/render_response.qtpl:42
qw422016.N().Q(v)
//line app/vmselect/graphite/render_response.qtpl:43
if i+1 < len(tagKeys) {
//line app/vmselect/graphite/render_response.qtpl:43
qw422016.N().S(`,`)
//line app/vmselect/graphite/render_response.qtpl:43
}
//line app/vmselect/graphite/render_response.qtpl:44
}
//line app/vmselect/graphite/render_response.qtpl:44
qw422016.N().S(`},"datapoints":[`)
//line app/vmselect/graphite/render_response.qtpl:47
timestamps := s.Timestamps
//line app/vmselect/graphite/render_response.qtpl:48
for i, v := range s.Values {
//line app/vmselect/graphite/render_response.qtpl:48
qw422016.N().S(`[`)
//line app/vmselect/graphite/render_response.qtpl:50
if math.IsNaN(v) {
//line app/vmselect/graphite/render_response.qtpl:50
qw422016.N().S(`null`)
//line app/vmselect/graphite/render_response.qtpl:50
} else {
//line app/vmselect/graphite/render_response.qtpl:50
qw422016.N().F(v)
//line app/vmselect/graphite/render_response.qtpl:50
}
//line app/vmselect/graphite/render_response.qtpl:50
qw422016.N().S(`,`)
//line app/vmselect/graphite/render_response.qtpl:51
qw422016.N().DL(timestamps[i] / 1e3)
//line app/vmselect/graphite/render_response.qtpl:51
qw422016.N().S(`]`)
//line app/vmselect/graphite/render_response.qtpl:53
if i+1 < len(timestamps) {
//line app/vmselect/graphite/render_response.qtpl:53
qw422016.N().S(`,`)
//line app/vmselect/graphite/render_response.qtpl:53
}
//line app/vmselect/graphite/render_response.qtpl:54
}
//line app/vmselect/graphite/render_response.qtpl:54
qw422016.N().S(`]}`)
//line app/vmselect/graphite/render_response.qtpl:57
}
//line app/vmselect/graphite/render_response.qtpl:57
func writerenderSeriesJSON(qq422016 qtio422016.Writer, s *series) {
//line app/vmselect/graphite/render_response.qtpl:57
qw422016 := qt422016.AcquireWriter(qq422016)
//line app/vmselect/graphite/render_response.qtpl:57
streamrenderSeriesJSON(qw422016, s)
//line app/vmselect/graphite/render_response.qtpl:57
qt422016.ReleaseWriter(qw422016)
//line app/vmselect/graphite/render_response.qtpl:57
}
//line app/vmselect/graphite/render_response.qtpl:57
func renderSeriesJSON(s *series) string {
//line app/vmselect/graphite/render_response.qtpl:57
qb422016 := qt422016.AcquireByteBuffer()
//line app/vmselect/graphite/render_response.qtpl:57
writerenderSeriesJSON(qb422016, s)
//line app/vmselect/graphite/render_response.qtpl:57
qs422016 := string(qb422016.B)
//line app/vmselect/graphite/render_response.qtpl:57
qt422016.ReleaseByteBuffer(qb422016)
//line app/vmselect/graphite/render_response.qtpl:57
return qs422016
//line app/vmselect/graphite/render_response.qtpl:57
}

File diff suppressed because it is too large

View File

@@ -0,0 +1,81 @@
package graphite
import (
"reflect"
"testing"
)
func TestUnmarshalTags(t *testing.T) {
f := func(s string, tagsExpected map[string]string) {
t.Helper()
tags := unmarshalTags(s)
if !reflect.DeepEqual(tags, tagsExpected) {
t.Fatalf("unexpected tags unmarshaled for s=%q\ngot\n%s\nwant\n%s", s, tags, tagsExpected)
}
}
f("", map[string]string{})
f("foo.bar", map[string]string{
"name": "foo.bar",
})
f("foo;bar=baz", map[string]string{
"name": "foo",
"bar": "baz",
})
f("foo.bar;bar;x=aa;baz=aaa;x=y", map[string]string{
"name": "foo.bar",
"baz": "aaa",
"x": "y",
})
}
func TestMarshalTags(t *testing.T) {
f := func(s, sExpected string) {
t.Helper()
tags := unmarshalTags(s)
sMarshaled := marshalTags(tags)
if sMarshaled != sExpected {
t.Fatalf("unexpected marshaled tags for s=%q\ngot\n%s\nwant\n%s", s, sMarshaled, sExpected)
}
}
f("", "")
f("foo", "foo")
f("foo;bar=baz", "foo;bar=baz")
f("foo.bar;baz;xx=yy;a=b", "foo.bar;a=b;xx=yy")
f("foo.bar;a=bb;a=ccc;d=a.b.c", "foo.bar;a=ccc;d=a.b.c")
}
func TestGetPathFromName(t *testing.T) {
f := func(name, pathExpected string) {
t.Helper()
path := getPathFromName(name)
if path != pathExpected {
t.Fatalf("unexpected path extracted from name %q; got %q; want %q", name, path, pathExpected)
}
}
f("", "")
f("foo", "foo")
f("foo.bar", "foo.bar")
f("foo.bar,baz.aa", "foo.bar,baz.aa")
f("foo(bar.baz,aa.bb)", "bar.baz")
f("foo(1, 'foo', aaa )", "aaa")
f("foo|bar(baz)", "foo")
f("a(b(c.d.e))", "c.d.e")
f("foo()", "foo()")
f("123", "123")
f("foo(123)", "123")
f("fo(bar", "fo(bar")
}
func TestGraphiteToGolangRegexpReplace(t *testing.T) {
f := func(s, replaceExpected string) {
t.Helper()
replace := graphiteToGolangRegexpReplace(s)
if replace != replaceExpected {
t.Fatalf("unexpected result for graphiteToGolangRegexpReplace(%q); got %q; want %q", s, replace, replaceExpected)
}
}
f("", "")
f("foo", "foo")
f(`a\d+`, `a\d+`)
f(`\1f\\oo\2`, `$1f\\oo$2`)
}

View File

@@ -224,9 +224,23 @@ func RequestHandler(w http.ResponseWriter, r *http.Request) bool {
return true
}
if strings.HasPrefix(path, "/functions") {
graphiteFunctionsRequests.Inc()
w.Header().Set("Content-Type", "application/json")
fmt.Fprintf(w, "%s", `{}`)
funcName := path[len("/functions"):]
funcName = strings.TrimPrefix(funcName, "/")
if funcName == "" {
graphiteFunctionsRequests.Inc()
if err := graphite.FunctionsHandler(startTime, w, r); err != nil {
graphiteFunctionsErrors.Inc()
httpserver.Errorf(w, r, "%s", err)
return true
}
return true
}
graphiteFunctionDetailsRequests.Inc()
if err := graphite.FunctionDetailsHandler(startTime, funcName, w, r); err != nil {
graphiteFunctionDetailsErrors.Inc()
httpserver.Errorf(w, r, "%s", err)
return true
}
return true
}
@@ -437,6 +451,14 @@ func RequestHandler(w http.ResponseWriter, r *http.Request) bool {
return true
}
return true
case "/render":
graphiteRenderRequests.Inc()
if err := graphite.RenderHandler(startTime, w, r); err != nil {
graphiteRenderErrors.Inc()
httpserver.Errorf(w, r, "error in %q: %s", r.URL.Path, err)
return true
}
return true
case "/metric-relabel-debug":
promscrapeMetricRelabelDebugRequests.Inc()
promscrape.WriteMetricRelabelDebug(w, r)
@@ -611,10 +633,17 @@ var (
	graphiteTagsDelSeriesRequests        = metrics.NewCounter(`vm_http_requests_total{path="/tags/delSeries"}`)
	graphiteTagsDelSeriesErrors          = metrics.NewCounter(`vm_http_request_errors_total{path="/tags/delSeries"}`)
	graphiteRenderRequests               = metrics.NewCounter(`vm_http_requests_total{path="/render"}`)
	graphiteRenderErrors                 = metrics.NewCounter(`vm_http_request_errors_total{path="/render"}`)
	promscrapeMetricRelabelDebugRequests = metrics.NewCounter(`vm_http_requests_total{path="/metric-relabel-debug"}`)
	promscrapeTargetRelabelDebugRequests = metrics.NewCounter(`vm_http_requests_total{path="/target-relabel-debug"}`)
	graphiteFunctionsRequests            = metrics.NewCounter(`vm_http_requests_total{path="/functions"}`)
	graphiteFunctionsErrors              = metrics.NewCounter(`vm_http_request_errors_total{path="/functions"}`)
	graphiteFunctionDetailsRequests      = metrics.NewCounter(`vm_http_requests_total{path="/functions/<func_name>"}`)
	graphiteFunctionDetailsErrors        = metrics.NewCounter(`vm_http_request_errors_total{path="/functions/<func_name>"}`)
	expandWithExprsRequests              = metrics.NewCounter(`vm_http_requests_total{path="/expand-with-exprs"}`)
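The counters above use the github.com/VictoriaMetrics/metrics package. A hedged, standalone sketch of the same pattern (the endpoint path and port below are made up for illustration):

package main

import (
	"net/http"

	"github.com/VictoriaMetrics/metrics"
)

// Per-path request and error counters, declared the same way as above.
var (
	myRequests = metrics.NewCounter(`vm_http_requests_total{path="/my-endpoint"}`)
	myErrors   = metrics.NewCounter(`vm_http_request_errors_total{path="/my-endpoint"}`)
)

func main() {
	http.HandleFunc("/my-endpoint", func(w http.ResponseWriter, r *http.Request) {
		myRequests.Inc()
		if _, err := w.Write([]byte("ok")); err != nil {
			myErrors.Inc()
		}
	})
	// Expose all registered counters in Prometheus text format.
	http.HandleFunc("/metrics", func(w http.ResponseWriter, r *http.Request) {
		metrics.WritePrometheus(w, true)
	})
	http.ListenAndServe(":8080", nil)
}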

View File

@ -21,6 +21,7 @@ created by v1.90.0 or newer versions. The solution is to upgrade to v1.90.0 or n
* SECURITY: upgrade base docker image (alpine) from 3.17.2 to 3.17.3. See [alpine 3.17.3 release notes](https://alpinelinux.org/posts/Alpine-3.17.3-released.html).
* FEATURE: open source [Graphite Render API](https://docs.victoriametrics.com/#graphite-render-api-usage). This API allows using VictoriaMetrics as a drop-in replacement for Graphite on both the data ingestion and querying sides, while reducing infrastructure costs by up to 10x compared to Graphite. See [this case study](https://docs.victoriametrics.com/CaseStudies.html#grammarly) as an example.
* FEATURE: release Windows binaries for [single-node VictoriaMetrics](https://docs.victoriametrics.com/), [VictoriaMetrics cluster](https://docs.victoriametrics.com/Cluster-VictoriaMetrics.html), [vmbackup](https://docs.victoriametrics.com/vmbackup.html) and [vmrestore](https://docs.victoriametrics.com/vmrestore.html). See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/3236), [this one](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/3821) and [this one](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/70).
* FEATURE: log metrics with truncated labels if the length of a label value in the ingested metric exceeds `-maxLabelValueLen`. This should simplify debugging in this case.
* FEATURE: [vmagent](https://docs.victoriametrics.com/vmagent.html): show target URL when debugging [target relabeling](https://docs.victoriametrics.com/vmagent.html#relabel-debug). This should simplify target relabel debugging a bit. See [this pull request](https://github.com/VictoriaMetrics/VictoriaMetrics/pull/3882).

View File

@ -359,7 +359,7 @@ Check practical examples of VictoriaMetrics API [here](https://docs.victoriametr
- URLs for [Graphite Metrics API](https://graphite-api.readthedocs.io/en/latest/api.html#the-metrics-api): `http://<vmselect>:8481/select/<accountID>/graphite/<suffix>`, where:
- `<accountID>` is an arbitrary number identifying the data namespace for the query (aka tenant)
- `<suffix>` may have the following values:
- `render` - implements Graphite Render API. See [these docs](https://graphite.readthedocs.io/en/stable/render_api.html). This functionality is available in [Enterprise package](https://docs.victoriametrics.com/enterprise.html). Enterprise binaries can be downloaded and evaluated for free from [the releases page](https://github.com/VictoriaMetrics/VictoriaMetrics/releases).
- `render` - implements Graphite Render API. See [these docs](https://graphite.readthedocs.io/en/stable/render_api.html).
- `metrics/find` - searches Graphite metrics. See [these docs](https://graphite-api.readthedocs.io/en/latest/api.html#metrics-find).
- `metrics/expand` - expands Graphite metrics. See [these docs](https://graphite-api.readthedocs.io/en/latest/api.html#metrics-expand).
- `metrics/index.json` - returns all the metric names. See [these docs](https://graphite-api.readthedocs.io/en/latest/api.html#metrics-index-json).
@ -1128,9 +1128,9 @@ Below is the output for `/path/to/vmselect -help`:
-search.disableCache
Whether to disable response caching. This may be useful during data backfilling
-search.graphiteMaxPointsPerSeries int
The maximum number of points per series Graphite render API can return. This flag is available only in VictoriaMetrics enterprise. See https://docs.victoriametrics.com/enterprise.html (default 1000000)
The maximum number of points per series Graphite render API can return (default 1000000)
-search.graphiteStorageStep duration
The interval between datapoints stored in the database. It is used at Graphite Render API handler for normalizing the interval between datapoints in case it isn't normalized. It can be overridden by sending 'storage_step' query arg to /render API or by sending the desired interval via 'Storage-Step' http header during querying /render API. This flag is available only in VictoriaMetrics enterprise. See https://docs.victoriametrics.com/enterprise.html (default 10s)
The interval between datapoints stored in the database. It is used in the Graphite Render API handler for normalizing the interval between datapoints in case it isn't normalized. It can be overridden by sending 'storage_step' query arg to /render API or by sending the desired interval via 'Storage-Step' http header during querying /render API (default 10s)
-search.latencyOffset duration
The time when data points become visible in query results after the collection. It can be overridden on per-query basis via latency_offset arg. Too small value can result in incomplete last points for query results (default 30s)
-search.logQueryMemoryUsage size
@ -1147,7 +1147,7 @@ Below is the output for `/path/to/vmselect -help`:
-search.maxFederateSeries int
The maximum number of time series, which can be returned from /federate. This option allows limiting memory usage (default 1000000)
-search.maxGraphiteSeries int
The maximum number of time series, which can be scanned during queries to Graphite Render API. See https://docs.victoriametrics.com/#graphite-render-api-usage . This flag is available only in VictoriaMetrics enterprise. See https://docs.victoriametrics.com/enterprise.html (default 300000)
The maximum number of time series, which can be scanned during queries to Graphite Render API. See https://docs.victoriametrics.com/#graphite-render-api-usage (default 300000)
-search.maxLookback duration
Synonym to -search.lookback-delta from Prometheus. The value is dynamically detected from interval between time series datapoints if not set. It can be overridden on per-query basis via max_lookback arg. See also '-search.maxStalenessInterval' flag, which has the same meaning due to historical reasons
-search.maxMemoryPerQuery size

View File

@ -41,7 +41,8 @@ VictoriaMetrics has the following prominent features:
* It can be used as long-term storage for Prometheus. See [these docs](#prometheus-setup) for details.
* It can be used as a drop-in replacement for Prometheus in Grafana, because it supports [Prometheus querying API](#prometheus-querying-api-usage).
* It can be used as a drop-in replacement for Graphite in Grafana, because it supports [Graphite API](#graphite-api-usage).
* It features easy setup and operation:
  VictoriaMetrics allows reducing infrastructure costs by more than 10x compared to Graphite - see [this case study](https://docs.victoriametrics.com/CaseStudies.html#grammarly).
* It is easy to set up and operate:
* VictoriaMetrics consists of a single [small executable](https://medium.com/@valyala/stripping-dependency-bloat-in-victoriametrics-docker-image-983fb5912b0d)
without external dependencies.
* All the configuration is done via explicit command-line flags with reasonable defaults.
@ -628,7 +629,6 @@ The `__graphite__` pseudo-label supports e.g. alternate regexp filters such as `
VictoriaMetrics also supports the Graphite query language - see [these docs](#graphite-render-api-usage).
## How to send data from OpenTSDB-compatible agents
VictoriaMetrics supports [telnet put protocol](http://opentsdb.net/docs/build/html/api_telnet/put.html)
@ -830,10 +830,10 @@ VictoriaMetrics supports `__graphite__` pseudo-label for filtering time series w
### Graphite Render API usage
[VictoriaMetrics Enterprise](https://docs.victoriametrics.com/enterprise.html) supports [Graphite Render API](https://graphite.readthedocs.io/en/stable/render_api.html) subset
VictoriaMetrics supports a subset of the [Graphite Render API](https://graphite.readthedocs.io/en/stable/render_api.html)
at the `/render` endpoint, which is used by the [Graphite datasource in Grafana](https://grafana.com/docs/grafana/latest/datasources/graphite/).
When configuring Graphite datasource in Grafana, the `Storage-Step` http request header must be set to a step between Graphite data points stored in VictoriaMetrics. For example, `Storage-Step: 10s` would mean 10 seconds distance between Graphite datapoints stored in VictoriaMetrics.
Enterprise binaries can be downloaded and evaluated for free from [the releases page](https://github.com/VictoriaMetrics/VictoriaMetrics/releases).
When configuring the Graphite datasource in Grafana, the `Storage-Step` http request header must be set to the step between Graphite data points
stored in VictoriaMetrics. For example, `Storage-Step: 10s` means a 10-second step between Graphite data points stored in VictoriaMetrics.
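As an illustration, the following sketch queries `/render` from Go with the `Storage-Step` header set; the address and target expression are assumptions, while `target`, `from` and `format` are standard Graphite Render API parameters:

package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Hypothetical single-node VictoriaMetrics instance on the default port.
	req, err := http.NewRequest(http.MethodGet,
		"http://localhost:8428/render?target=foo.bar.*&from=-1h&format=json", nil)
	if err != nil {
		panic(err)
	}
	// Storage-Step tells VictoriaMetrics the interval between stored
	// Graphite data points, as described above.
	req.Header.Set("Storage-Step", "10s")
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(body))
}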
### Graphite Metrics API usage
@ -2439,9 +2439,9 @@ Pass `-help` to VictoriaMetrics in order to see the list of supported command-li
-search.disableCache
Whether to disable response caching. This may be useful during data backfilling
-search.graphiteMaxPointsPerSeries int
The maximum number of points per series Graphite render API can return. This flag is available only in VictoriaMetrics enterprise. See https://docs.victoriametrics.com/enterprise.html (default 1000000)
The maximum number of points per series Graphite render API can return (default 1000000)
-search.graphiteStorageStep duration
The interval between datapoints stored in the database. It is used at Graphite Render API handler for normalizing the interval between datapoints in case it isn't normalized. It can be overridden by sending 'storage_step' query arg to /render API or by sending the desired interval via 'Storage-Step' http header during querying /render API. This flag is available only in VictoriaMetrics enterprise. See https://docs.victoriametrics.com/enterprise.html (default 10s)
The interval between datapoints stored in the database. It is used in the Graphite Render API handler for normalizing the interval between datapoints in case it isn't normalized. It can be overridden by sending 'storage_step' query arg to /render API or by sending the desired interval via 'Storage-Step' http header during querying /render API (default 10s)
-search.latencyOffset duration
The time when data points become visible in query results after the collection. It can be overridden on per-query basis via latency_offset arg. Too small value can result in incomplete last points for query results (default 30s)
-search.logQueryMemoryUsage size
@ -2458,7 +2458,7 @@ Pass `-help` to VictoriaMetrics in order to see the list of supported command-li
-search.maxFederateSeries int
The maximum number of time series, which can be returned from /federate. This option allows limiting memory usage (default 1000000)
-search.maxGraphiteSeries int
The maximum number of time series, which can be scanned during queries to Graphite Render API. See https://docs.victoriametrics.com/#graphite-render-api-usage . This flag is available only in VictoriaMetrics enterprise. See https://docs.victoriametrics.com/enterprise.html (default 300000)
The maximum number of time series, which can be scanned during queries to Graphite Render API. See https://docs.victoriametrics.com/#graphite-render-api-usage (default 300000)
-search.maxLookback duration
Synonym to -search.lookback-delta from Prometheus. The value is dynamically detected from interval between time series datapoints if not set. It can be overridden on per-query basis via max_lookback arg. See also '-search.maxStalenessInterval' flag, which has the same meaning due to historical reasons
-search.maxMemoryPerQuery size

View File

@ -44,7 +44,8 @@ VictoriaMetrics has the following prominent features:
* It can be used as long-term storage for Prometheus. See [these docs](#prometheus-setup) for details.
* It can be used as a drop-in replacement for Prometheus in Grafana, because it supports [Prometheus querying API](#prometheus-querying-api-usage).
* It can be used as a drop-in replacement for Graphite in Grafana, because it supports [Graphite API](#graphite-api-usage).
* It features easy setup and operation:
  VictoriaMetrics allows reducing infrastructure costs by more than 10x compared to Graphite - see [this case study](https://docs.victoriametrics.com/CaseStudies.html#grammarly).
* It is easy to set up and operate:
* VictoriaMetrics consists of a single [small executable](https://medium.com/@valyala/stripping-dependency-bloat-in-victoriametrics-docker-image-983fb5912b0d)
without external dependencies.
* All the configuration is done via explicit command-line flags with reasonable defaults.
@ -631,7 +632,6 @@ The `__graphite__` pseudo-label supports e.g. alternate regexp filters such as `
VictoriaMetrics also supports the Graphite query language - see [these docs](#graphite-render-api-usage).
## How to send data from OpenTSDB-compatible agents
VictoriaMetrics supports [telnet put protocol](http://opentsdb.net/docs/build/html/api_telnet/put.html)
@ -833,10 +833,10 @@ VictoriaMetrics supports `__graphite__` pseudo-label for filtering time series w
### Graphite Render API usage
[VictoriaMetrics Enterprise](https://docs.victoriametrics.com/enterprise.html) supports [Graphite Render API](https://graphite.readthedocs.io/en/stable/render_api.html) subset
VictoriaMetrics supports a subset of the [Graphite Render API](https://graphite.readthedocs.io/en/stable/render_api.html)
at the `/render` endpoint, which is used by the [Graphite datasource in Grafana](https://grafana.com/docs/grafana/latest/datasources/graphite/).
When configuring Graphite datasource in Grafana, the `Storage-Step` http request header must be set to a step between Graphite data points stored in VictoriaMetrics. For example, `Storage-Step: 10s` would mean 10 seconds distance between Graphite datapoints stored in VictoriaMetrics.
Enterprise binaries can be downloaded and evaluated for free from [the releases page](https://github.com/VictoriaMetrics/VictoriaMetrics/releases).
When configuring the Graphite datasource in Grafana, the `Storage-Step` http request header must be set to the step between Graphite data points
stored in VictoriaMetrics. For example, `Storage-Step: 10s` means a 10-second step between Graphite data points stored in VictoriaMetrics.
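Alternatively, the step can be passed via the `storage_step` query arg instead of the `Storage-Step` header (see the `-search.graphiteStorageStep` flag below). A minimal sketch, assuming a local single-node instance and an illustrative target expression:

package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// storage_step=10s plays the same role as the Storage-Step header.
	resp, err := http.Get("http://localhost:8428/render?target=foo.bar.*&from=-1h&format=json&storage_step=10s")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(body))
}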
### Graphite Metrics API usage
@ -2442,9 +2442,9 @@ Pass `-help` to VictoriaMetrics in order to see the list of supported command-li
-search.disableCache
Whether to disable response caching. This may be useful during data backfilling
-search.graphiteMaxPointsPerSeries int
The maximum number of points per series Graphite render API can return. This flag is available only in VictoriaMetrics enterprise. See https://docs.victoriametrics.com/enterprise.html (default 1000000)
The maximum number of points per series Graphite render API can return (default 1000000)
-search.graphiteStorageStep duration
The interval between datapoints stored in the database. It is used at Graphite Render API handler for normalizing the interval between datapoints in case it isn't normalized. It can be overridden by sending 'storage_step' query arg to /render API or by sending the desired interval via 'Storage-Step' http header during querying /render API. This flag is available only in VictoriaMetrics enterprise. See https://docs.victoriametrics.com/enterprise.html (default 10s)
The interval between datapoints stored in the database. It is used in the Graphite Render API handler for normalizing the interval between datapoints in case it isn't normalized. It can be overridden by sending 'storage_step' query arg to /render API or by sending the desired interval via 'Storage-Step' http header during querying /render API (default 10s)
-search.latencyOffset duration
The time when data points become visible in query results after the collection. It can be overridden on per-query basis via latency_offset arg. Too small value can result in incomplete last points for query results (default 30s)
-search.logQueryMemoryUsage size
@ -2461,7 +2461,7 @@ Pass `-help` to VictoriaMetrics in order to see the list of supported command-li
-search.maxFederateSeries int
The maximum number of time series, which can be returned from /federate. This option allows limiting memory usage (default 1000000)
-search.maxGraphiteSeries int
The maximum number of time series, which can be scanned during queries to Graphite Render API. See https://docs.victoriametrics.com/#graphite-render-api-usage . This flag is available only in VictoriaMetrics enterprise. See https://docs.victoriametrics.com/enterprise.html (default 300000)
The maximum number of time series, which can be scanned during queries to Graphite Render API. See https://docs.victoriametrics.com/#graphite-render-api-usage (default 300000)
-search.maxLookback duration
Synonym to -search.lookback-delta from Prometheus. The value is dynamically detected from interval between time series datapoints if not set. It can be overridden on per-query basis via max_lookback arg. See also '-search.maxStalenessInterval' flag, which has the same meaning due to historical reasons
-search.maxMemoryPerQuery size

View File

@ -34,10 +34,6 @@ plus the following additional features:
by specifying different retentions to different datasets.
- [Automatic discovery of vmstorage nodes](https://docs.victoriametrics.com/Cluster-VictoriaMetrics.html#automatic-vmstorage-discovery) -
this feature allows updating the list of `vmstorage` nodes at `vminsert` and `vmselect` without the need to restart these services.
- [Graphite querying](https://docs.victoriametrics.com/#graphite-render-api-usage) - this feature allows seamless
transition from Graphite to VictoriaMetrics without the need to modify queries at dashboards and alerts.
VictoriaMetrics allows reducing infrastructure costs by more than 10x comparing to Graphite -
see [this case study](https://docs.victoriametrics.com/CaseStudies.html#grammarly).
- [Backup automation](https://docs.victoriametrics.com/vmbackupmanager.html).
- [Advanced per-tenant stats](https://docs.victoriametrics.com/PerTenantStatistic.html).
- [Advanced auth and rate limiter](https://docs.victoriametrics.com/vmgateway.html).