diff --git a/README.md b/README.md index 1b06d67f0..9cb0ba6dc 100644 --- a/README.md +++ b/README.md @@ -270,25 +270,21 @@ See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/3781) Add the following lines to Prometheus config file (it is usually located at `/etc/prometheus/prometheus.yml`) in order to send data to VictoriaMetrics: -
```yml remote_write: - url: http://:8428/api/v1/write ``` -
Substitute `` with the hostname or IP address of VictoriaMetrics. Then apply the new config via the following command:
-
```console kill -HUP `pidof prometheus` ``` -
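The `kill -HUP` reload works because Prometheus traps `SIGHUP` and re-reads its config instead of terminating, which is the default reaction to that signal. A minimal shell sketch of the mechanism (a throwaway shell standing in for the daemon, not Prometheus itself):

```console
# SIGHUP normally terminates a process; a process that traps it
# can run a reload handler and keep going instead.
sh -c 'trap "echo config reloaded" HUP; kill -HUP $$'
```

Prometheus can also be reloaded over HTTP via `POST /-/reload` when started with `--web.enable-lifecycle`, if sending signals is inconvenient.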
Prometheus writes incoming data to local storage and replicates it to remote storage in parallel. This means that data remains available in local storage for `--storage.tsdb.retention.time` duration @@ -309,7 +305,6 @@ across Prometheus instances, so time series could be filtered and grouped by thi For highly loaded Prometheus instances (200k+ samples per second) the following tuning may be applied: -
```yaml remote_write: @@ -320,7 +315,6 @@ remote_write: max_shards: 30 ``` -
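These queue settings bound remote-write memory: each shard buffers up to `capacity` samples, so the worst case is roughly `max_shards × capacity` samples in flight. A back-of-the-envelope check (the `capacity` value below is hypothetical, since it is not shown above; `max_shards` matches the example):

```console
# Rough upper bound on samples buffered by remote_write shards:
max_shards=30
capacity=20000
echo $(( max_shards * capacity ))   # prints 600000
```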
Using remote write increases memory usage for Prometheus by up to ~25%. If Prometheus consumes too much memory because of remote write, try lowering the `max_samples_per_send` and `capacity` params.
@@ -529,24 +523,20 @@ or via [configuration file](https://docs.datadoghq.com/agent/guide/agent-configu
To configure the DataDog agent via an ENV variable, set the following:
-
``` DD_DD_URL=http://victoriametrics:8428/datadog ``` -
_Choose the correct URL for VictoriaMetrics [here](https://docs.victoriametrics.com/url-examples.html#datadog)._

To configure the DataDog agent via its [configuration file](https://github.com/DataDog/datadog-agent/blob/878600ef7a55c5ef0efb41ed0915f020cf7e3bd0/pkg/config/config_template.yaml#L33), add the following line:
-
``` dd_url: http://victoriametrics:8428/datadog ``` -
[vmagent](https://docs.victoriametrics.com/vmagent.html) can also accept the Datadog metrics format. Depending on where vmagent will forward data, pick the [single-node or cluster URL](https://docs.victoriametrics.com/url-examples.html#datadog) format.
@@ -562,14 +552,12 @@ sending via ENV variable `DD_ADDITIONAL_ENDPOINTS` or via configuration file `ad
Run DataDog using the following ENV variable with VictoriaMetrics as an additional metrics receiver:
-
``` DD_ADDITIONAL_ENDPOINTS='{\"http://victoriametrics:8428/datadog\": [\"apikey\"]}' ``` -
_Choose the correct URL for VictoriaMetrics [here](https://docs.victoriametrics.com/url-examples.html#datadog)._
@@ -577,7 +565,6 @@ _Choose correct URL for VictoriaMetrics [here](https://docs.victoriametrics.com/
To configure DataDog Dual Shipping via the [configuration file](https://docs.datadoghq.com/agent/guide/agent-configuration-files), add the following line:
-
``` additional_endpoints: @@ -585,7 +572,6 @@ additional_endpoints: - apikey ``` -
### Send via cURL @@ -654,24 +640,20 @@ foo_field2{tag1="value1", tag2="value2"} 40 Example for writing data with [InfluxDB line protocol](https://docs.influxdata.com/influxdb/v1.7/write_protocols/line_protocol_tutorial/) to local VictoriaMetrics using `curl`: -
```console curl -d 'measurement,tag1=value1,tag2=value2 field1=123,field2=1.23' -X POST 'http://localhost:8428/write' ``` -
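Several points can go into one request body by joining line-protocol lines with `\n`. A sketch of preparing such a payload locally (measurement, tags and values are illustrative):

```console
# Two newline-delimited line-protocol points in one payload file:
printf 'measurement,tag1=value1 field1=123\nmeasurement,tag1=value2 field1=456\n' > payload.txt
wc -l < payload.txt
# Send the whole file in a single request:
#   curl --data-binary @payload.txt -X POST 'http://localhost:8428/write'
```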
An arbitrary number of lines delimited by `\n` (aka newline char) can be sent in a single request. After that the data may be read via [/api/v1/export](#how-to-export-data-in-json-line-format) endpoint:
-
```console curl -G 'http://localhost:8428/api/v1/export' -d 'match={__name__=~"measurement_.*"}' ``` -
The `/api/v1/export` endpoint should return the following response: @@ -698,13 +680,11 @@ VictoriaMetrics exposes endpoint for InfluxDB v2 HTTP API at `/influx/api/v2/wri In order to write data with InfluxDB line protocol to local VictoriaMetrics using `curl`: -
```console curl -d 'measurement,tag1=value1,tag2=value2 field1=123,field2=1.23' -X POST 'http://localhost:8428/api/v2/write' ``` -
The `/api/v1/export` endpoint should return the following response: @@ -735,13 +715,11 @@ VictoriaMetrics sets the current time if the timestamp is omitted. An arbitrary number of lines delimited by `\n` (aka newline char) can be sent in one go. After that the data may be read via [/api/v1/export](#how-to-export-data-in-json-line-format) endpoint: -
```console curl -G 'http://localhost:8428/api/v1/export' -d 'match=foo.bar.baz' ``` -
The `/api/v1/export` endpoint should return the following response: @@ -786,24 +764,20 @@ Send data to the given address from OpenTSDB-compatible agents. Example for writing data with OpenTSDB protocol to local VictoriaMetrics using `nc`: -
```console echo "put foo.bar.baz `date +%s` 123 tag1=value1 tag2=value2" | nc -N localhost 4242 ``` -
An arbitrary number of lines delimited by `\n` (aka newline char) can be sent in one go. After that the data may be read via [/api/v1/export](#how-to-export-data-in-json-line-format) endpoint: -
```console curl -G 'http://localhost:8428/api/v1/export' -d 'match=foo.bar.baz' ``` -
The `/api/v1/export` endpoint should return the following response: @@ -824,33 +798,26 @@ Send data to the given address from OpenTSDB-compatible agents. Example for writing a single data point: -
```console curl -H 'Content-Type: application/json' -d '{"metric":"x.y.z","value":45.34,"tags":{"t1":"v1","t2":"v2"}}' http://localhost:4242/api/put ``` -
Example for writing multiple data points in a single request: -
- ```console curl -H 'Content-Type: application/json' -d '[{"metric":"foo","value":45.34},{"metric":"bar","value":43}]' http://localhost:4242/api/put ``` -
After that the data may be read via [/api/v1/export](#how-to-export-data-in-json-line-format) endpoint: -
```console curl -G 'http://localhost:8428/api/v1/export' -d 'match[]=x.y.z' -d 'match[]=foo' -d 'match[]=bar' ``` -
The `/api/v1/export` endpoint should return the following response: @@ -1286,13 +1253,11 @@ In this case the output may contain multiple lines with samples for the same tim Pass `Accept-Encoding: gzip` HTTP header in the request to `/api/v1/export` in order to reduce network bandwidth during exporting big amounts of time series data. This enables gzip compression for the exported data. Example for exporting gzipped data: -
```console curl -H 'Accept-Encoding: gzip' http://localhost:8428/api/v1/export -d 'match[]={__name__!=""}' > data.jsonl.gz ``` -
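The result is ordinary gzip over JSON lines, so standard tools can inspect it without VictoriaMetrics. A sketch with a stand-in file (the JSON line below is illustrative, not real export output):

```console
# Build a stand-in for data.jsonl.gz: one gzipped JSON line per series
printf '{"metric":{"__name__":"foo"},"values":[123],"timestamps":[1680000000000]}\n' | gzip > data.jsonl.gz
gunzip -c data.jsonl.gz | wc -l   # one line per exported series
```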
The maximum duration for each request to `/api/v1/export` is limited by `-search.maxExportDuration` command-line flag. @@ -1506,23 +1471,19 @@ and in [Pushgateway format](https://github.com/prometheus/pushgateway#url) via ` For example, the following command imports a single line in Prometheus exposition format into VictoriaMetrics: -
```console curl -d 'foo{bar="baz"} 123' -X POST 'http://localhost:8428/api/v1/import/prometheus' ``` -
The following command may be used for verifying the imported data: -
```console curl -G 'http://localhost:8428/api/v1/export' -d 'match={__name__=~"foo"}' ``` -
It should return something like the following: @@ -1532,24 +1493,20 @@ It should return something like the following: The following command imports a single metric via [Pushgateway format](https://github.com/prometheus/pushgateway#url) with `{job="my_app",instance="host123"}` labels: -
```console curl -d 'metric{label="abc"} 123' -X POST 'http://localhost:8428/api/v1/import/prometheus/metrics/job/my_app/instance/host123' ``` -
Pass `Content-Encoding: gzip` HTTP request header to `/api/v1/import/prometheus` for importing gzipped data: -
```console # Import gzipped data to : curl -X POST -H 'Content-Encoding: gzip' http://destination-victoriametrics:8428/api/v1/import/prometheus -T prometheus_data.gz ``` -
Extra labels may be added to all the imported metrics either via [Pushgateway format](https://github.com/prometheus/pushgateway#url) or by passing `extra_label=name=value` query args. For example, `/api/v1/import/prometheus?extra_label=foo=bar` would add `{foo="bar"}` label to all the imported metrics. @@ -2446,23 +2403,19 @@ VictoriaMetrics provides handlers for collecting the following [Go profiles](htt * Memory profile. It can be collected with the following command (replace `0.0.0.0` with hostname if needed): -
```console curl http://0.0.0.0:8428/debug/pprof/heap > mem.pprof ``` -
* CPU profile. It can be collected with the following command (replace `0.0.0.0` with hostname if needed): -
```console curl http://0.0.0.0:8428/debug/pprof/profile > cpu.pprof ``` -
The command for collecting CPU profile waits for 30 seconds before returning. diff --git a/docs/Cluster-VictoriaMetrics.md b/docs/Cluster-VictoriaMetrics.md index 134ff1901..70c0ee43a 100644 --- a/docs/Cluster-VictoriaMetrics.md +++ b/docs/Cluster-VictoriaMetrics.md @@ -860,17 +860,14 @@ All the cluster components provide the following handlers for [profiling](https: Example command for collecting cpu profile from `vmstorage` (replace `0.0.0.0` with `vmstorage` hostname if needed): -
```console curl http://0.0.0.0:8482/debug/pprof/profile > cpu.pprof ``` -
Example command for collecting memory profile from `vminsert` (replace `0.0.0.0` with `vminsert` hostname if needed): -
```console
curl http://0.0.0.0:8480/debug/pprof/heap > mem.pprof
@@ -878,7 +875,6 @@ curl http://0.0.0.0:8480/debug/pprof/heap > mem.pprof
It is safe to share the collected profiles from a security point of view, since they do not contain sensitive information.
-
## vmalert diff --git a/docs/Quick-Start.md b/docs/Quick-Start.md index c0c67522e..1db4f9ffe 100644 --- a/docs/Quick-Start.md +++ b/docs/Quick-Start.md @@ -48,14 +48,12 @@ The following commands download the latest available and start it at port 8428, while storing the ingested data at `victoria-metrics-data` subdirectory under the current directory: -
```console docker pull victoriametrics/victoria-metrics:latest docker run -it --rm -v `pwd`/victoria-metrics-data:/victoria-metrics-data -p 8428:8428 victoriametrics/victoria-metrics:latest ``` -
Open http://localhost:8428 in a web browser and read [these docs](https://docs.victoriametrics.com/#operation).
@@ -71,14 +69,12 @@ and start the docker container via 'make docker-cluster-up'. Further customizati
the [docker-compose-cluster.yml](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/deployment/docker/docker-compose-cluster.yml) file.
-
```console git clone https://github.com/VictoriaMetrics/VictoriaMetrics && cd VictoriaMetrics make docker-cluster-up ``` -
See more details [here](https://github.com/VictoriaMetrics/VictoriaMetrics/tree/master/deployment/docker#readme). diff --git a/docs/README.md b/docs/README.md index 95a08c161..34803f56e 100644 --- a/docs/README.md +++ b/docs/README.md @@ -273,25 +273,21 @@ See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/3781) Add the following lines to Prometheus config file (it is usually located at `/etc/prometheus/prometheus.yml`) in order to send data to VictoriaMetrics: -
```yml remote_write: - url: http://:8428/api/v1/write ``` -
Substitute `` with the hostname or IP address of VictoriaMetrics. Then apply the new config via the following command:
-
```console kill -HUP `pidof prometheus` ``` -
Prometheus writes incoming data to local storage and replicates it to remote storage in parallel. This means that data remains available in local storage for `--storage.tsdb.retention.time` duration @@ -312,7 +308,6 @@ across Prometheus instances, so time series could be filtered and grouped by thi For highly loaded Prometheus instances (200k+ samples per second) the following tuning may be applied: -
```yaml remote_write: @@ -323,7 +318,6 @@ remote_write: max_shards: 30 ``` -
Using remote write increases memory usage for Prometheus by up to ~25%. If Prometheus consumes too much memory because of remote write, try lowering the `max_samples_per_send` and `capacity` params.
@@ -532,24 +526,20 @@ or via [configuration file](https://docs.datadoghq.com/agent/guide/agent-configu
To configure the DataDog agent via an ENV variable, set the following:
-
``` DD_DD_URL=http://victoriametrics:8428/datadog ``` -
_Choose the correct URL for VictoriaMetrics [here](https://docs.victoriametrics.com/url-examples.html#datadog)._

To configure the DataDog agent via its [configuration file](https://github.com/DataDog/datadog-agent/blob/878600ef7a55c5ef0efb41ed0915f020cf7e3bd0/pkg/config/config_template.yaml#L33), add the following line:
-
``` dd_url: http://victoriametrics:8428/datadog ``` -
[vmagent](https://docs.victoriametrics.com/vmagent.html) can also accept the Datadog metrics format. Depending on where vmagent will forward data, pick the [single-node or cluster URL](https://docs.victoriametrics.com/url-examples.html#datadog) format.
@@ -565,14 +555,12 @@ sending via ENV variable `DD_ADDITIONAL_ENDPOINTS` or via configuration file `ad
Run DataDog using the following ENV variable with VictoriaMetrics as an additional metrics receiver:
-
``` DD_ADDITIONAL_ENDPOINTS='{\"http://victoriametrics:8428/datadog\": [\"apikey\"]}' ``` -
_Choose the correct URL for VictoriaMetrics [here](https://docs.victoriametrics.com/url-examples.html#datadog)._
@@ -580,7 +568,6 @@ _Choose correct URL for VictoriaMetrics [here](https://docs.victoriametrics.com/
To configure DataDog Dual Shipping via the [configuration file](https://docs.datadoghq.com/agent/guide/agent-configuration-files), add the following line:
-
``` additional_endpoints: @@ -588,7 +575,6 @@ additional_endpoints: - apikey ``` -
### Send via cURL @@ -657,24 +643,20 @@ foo_field2{tag1="value1", tag2="value2"} 40 Example for writing data with [InfluxDB line protocol](https://docs.influxdata.com/influxdb/v1.7/write_protocols/line_protocol_tutorial/) to local VictoriaMetrics using `curl`: -
```console curl -d 'measurement,tag1=value1,tag2=value2 field1=123,field2=1.23' -X POST 'http://localhost:8428/write' ``` -
An arbitrary number of lines delimited by `\n` (aka newline char) can be sent in a single request. After that the data may be read via [/api/v1/export](#how-to-export-data-in-json-line-format) endpoint:
-
```console curl -G 'http://localhost:8428/api/v1/export' -d 'match={__name__=~"measurement_.*"}' ``` -
The `/api/v1/export` endpoint should return the following response: @@ -701,13 +683,11 @@ VictoriaMetrics exposes endpoint for InfluxDB v2 HTTP API at `/influx/api/v2/wri In order to write data with InfluxDB line protocol to local VictoriaMetrics using `curl`: -
```console curl -d 'measurement,tag1=value1,tag2=value2 field1=123,field2=1.23' -X POST 'http://localhost:8428/api/v2/write' ``` -
The `/api/v1/export` endpoint should return the following response: @@ -738,13 +718,11 @@ VictoriaMetrics sets the current time if the timestamp is omitted. An arbitrary number of lines delimited by `\n` (aka newline char) can be sent in one go. After that the data may be read via [/api/v1/export](#how-to-export-data-in-json-line-format) endpoint: -
```console curl -G 'http://localhost:8428/api/v1/export' -d 'match=foo.bar.baz' ``` -
The `/api/v1/export` endpoint should return the following response: @@ -789,24 +767,20 @@ Send data to the given address from OpenTSDB-compatible agents. Example for writing data with OpenTSDB protocol to local VictoriaMetrics using `nc`: -
```console echo "put foo.bar.baz `date +%s` 123 tag1=value1 tag2=value2" | nc -N localhost 4242 ``` -
An arbitrary number of lines delimited by `\n` (aka newline char) can be sent in one go. After that the data may be read via [/api/v1/export](#how-to-export-data-in-json-line-format) endpoint: -
```console curl -G 'http://localhost:8428/api/v1/export' -d 'match=foo.bar.baz' ``` -
The `/api/v1/export` endpoint should return the following response: @@ -827,33 +801,27 @@ Send data to the given address from OpenTSDB-compatible agents. Example for writing a single data point: -
```console curl -H 'Content-Type: application/json' -d '{"metric":"x.y.z","value":45.34,"tags":{"t1":"v1","t2":"v2"}}' http://localhost:4242/api/put ``` -
Example for writing multiple data points in a single request: -
```console curl -H 'Content-Type: application/json' -d '[{"metric":"foo","value":45.34},{"metric":"bar","value":43}]' http://localhost:4242/api/put ``` -
After that the data may be read via [/api/v1/export](#how-to-export-data-in-json-line-format) endpoint: -
```console curl -G 'http://localhost:8428/api/v1/export' -d 'match[]=x.y.z' -d 'match[]=foo' -d 'match[]=bar' ``` -
The `/api/v1/export` endpoint should return the following response: @@ -1289,13 +1257,11 @@ In this case the output may contain multiple lines with samples for the same tim Pass `Accept-Encoding: gzip` HTTP header in the request to `/api/v1/export` in order to reduce network bandwidth during exporting big amounts of time series data. This enables gzip compression for the exported data. Example for exporting gzipped data: -
```console curl -H 'Accept-Encoding: gzip' http://localhost:8428/api/v1/export -d 'match[]={__name__!=""}' > data.jsonl.gz ``` -
The maximum duration for each request to `/api/v1/export` is limited by `-search.maxExportDuration` command-line flag. @@ -1509,23 +1475,19 @@ and in [Pushgateway format](https://github.com/prometheus/pushgateway#url) via ` For example, the following command imports a single line in Prometheus exposition format into VictoriaMetrics: -
```console curl -d 'foo{bar="baz"} 123' -X POST 'http://localhost:8428/api/v1/import/prometheus' ``` -
The following command may be used for verifying the imported data: -
```console curl -G 'http://localhost:8428/api/v1/export' -d 'match={__name__=~"foo"}' ``` -
It should return something like the following: @@ -1535,24 +1497,20 @@ It should return something like the following: The following command imports a single metric via [Pushgateway format](https://github.com/prometheus/pushgateway#url) with `{job="my_app",instance="host123"}` labels: -
```console curl -d 'metric{label="abc"} 123' -X POST 'http://localhost:8428/api/v1/import/prometheus/metrics/job/my_app/instance/host123' ``` -
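In the Pushgateway-style path, labels are encoded as alternating `/<label_name>/<label_value>` segments after `/metrics`. A sketch of assembling such a path from label values:

```console
# Build the import path for {job="my_app",instance="host123"}:
job=my_app
instance=host123
echo "/api/v1/import/prometheus/metrics/job/${job}/instance/${instance}"
# prints /api/v1/import/prometheus/metrics/job/my_app/instance/host123
```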
Pass `Content-Encoding: gzip` HTTP request header to `/api/v1/import/prometheus` for importing gzipped data: -
```console # Import gzipped data to : curl -X POST -H 'Content-Encoding: gzip' http://destination-victoriametrics:8428/api/v1/import/prometheus -T prometheus_data.gz ``` -
Extra labels may be added to all the imported metrics either via [Pushgateway format](https://github.com/prometheus/pushgateway#url) or by passing `extra_label=name=value` query args. For example, `/api/v1/import/prometheus?extra_label=foo=bar` would add `{foo="bar"}` label to all the imported metrics. @@ -2449,23 +2407,19 @@ VictoriaMetrics provides handlers for collecting the following [Go profiles](htt * Memory profile. It can be collected with the following command (replace `0.0.0.0` with hostname if needed): -
```console curl http://0.0.0.0:8428/debug/pprof/heap > mem.pprof ``` -
* CPU profile. It can be collected with the following command (replace `0.0.0.0` with hostname if needed): -
```console curl http://0.0.0.0:8428/debug/pprof/profile > cpu.pprof ``` -
The command for collecting CPU profile waits for 30 seconds before returning. diff --git a/docs/Single-server-VictoriaMetrics.md b/docs/Single-server-VictoriaMetrics.md index d5ad175e4..c2dc0c8dc 100644 --- a/docs/Single-server-VictoriaMetrics.md +++ b/docs/Single-server-VictoriaMetrics.md @@ -281,25 +281,21 @@ See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/3781) Add the following lines to Prometheus config file (it is usually located at `/etc/prometheus/prometheus.yml`) in order to send data to VictoriaMetrics: -
```yml remote_write: - url: http://:8428/api/v1/write ``` -
Substitute `` with the hostname or IP address of VictoriaMetrics. Then apply the new config via the following command:
-
```console kill -HUP `pidof prometheus` ``` -
Prometheus writes incoming data to local storage and replicates it to remote storage in parallel. This means that data remains available in local storage for `--storage.tsdb.retention.time` duration @@ -320,7 +316,6 @@ across Prometheus instances, so time series could be filtered and grouped by thi For highly loaded Prometheus instances (200k+ samples per second) the following tuning may be applied: -
```yaml remote_write: @@ -331,7 +326,6 @@ remote_write: max_shards: 30 ``` -
Using remote write increases memory usage for Prometheus by up to ~25%. If Prometheus consumes too much memory because of remote write, try lowering the `max_samples_per_send` and `capacity` params.
@@ -540,24 +534,20 @@ or via [configuration file](https://docs.datadoghq.com/agent/guide/agent-configu
To configure the DataDog agent via an ENV variable, set the following:
-
``` DD_DD_URL=http://victoriametrics:8428/datadog ``` -
_Choose the correct URL for VictoriaMetrics [here](https://docs.victoriametrics.com/url-examples.html#datadog)._

To configure the DataDog agent via its [configuration file](https://github.com/DataDog/datadog-agent/blob/878600ef7a55c5ef0efb41ed0915f020cf7e3bd0/pkg/config/config_template.yaml#L33), add the following line:
-
``` dd_url: http://victoriametrics:8428/datadog ``` -
[vmagent](https://docs.victoriametrics.com/vmagent.html) can also accept the Datadog metrics format. Depending on where vmagent will forward data, pick the [single-node or cluster URL](https://docs.victoriametrics.com/url-examples.html#datadog) format.
@@ -573,14 +563,12 @@ sending via ENV variable `DD_ADDITIONAL_ENDPOINTS` or via configuration file `ad
Run DataDog using the following ENV variable with VictoriaMetrics as an additional metrics receiver:
-
``` DD_ADDITIONAL_ENDPOINTS='{\"http://victoriametrics:8428/datadog\": [\"apikey\"]}' ``` -
_Choose the correct URL for VictoriaMetrics [here](https://docs.victoriametrics.com/url-examples.html#datadog)._
@@ -588,7 +576,6 @@ _Choose correct URL for VictoriaMetrics [here](https://docs.victoriametrics.com/
To configure DataDog Dual Shipping via the [configuration file](https://docs.datadoghq.com/agent/guide/agent-configuration-files), add the following line:
-
``` additional_endpoints: @@ -596,7 +583,6 @@ additional_endpoints: - apikey ``` -
### Send via cURL @@ -665,24 +651,20 @@ foo_field2{tag1="value1", tag2="value2"} 40 Example for writing data with [InfluxDB line protocol](https://docs.influxdata.com/influxdb/v1.7/write_protocols/line_protocol_tutorial/) to local VictoriaMetrics using `curl`: -
```console curl -d 'measurement,tag1=value1,tag2=value2 field1=123,field2=1.23' -X POST 'http://localhost:8428/write' ``` -
An arbitrary number of lines delimited by `\n` (aka newline char) can be sent in a single request. After that the data may be read via [/api/v1/export](#how-to-export-data-in-json-line-format) endpoint:
-
```console curl -G 'http://localhost:8428/api/v1/export' -d 'match={__name__=~"measurement_.*"}' ``` -
The `/api/v1/export` endpoint should return the following response: @@ -709,13 +691,11 @@ VictoriaMetrics exposes endpoint for InfluxDB v2 HTTP API at `/influx/api/v2/wri In order to write data with InfluxDB line protocol to local VictoriaMetrics using `curl`: -
```console curl -d 'measurement,tag1=value1,tag2=value2 field1=123,field2=1.23' -X POST 'http://localhost:8428/api/v2/write' ``` -
The `/api/v1/export` endpoint should return the following response: @@ -746,13 +726,11 @@ VictoriaMetrics sets the current time if the timestamp is omitted. An arbitrary number of lines delimited by `\n` (aka newline char) can be sent in one go. After that the data may be read via [/api/v1/export](#how-to-export-data-in-json-line-format) endpoint: -
```console curl -G 'http://localhost:8428/api/v1/export' -d 'match=foo.bar.baz' ``` -
The `/api/v1/export` endpoint should return the following response: @@ -797,24 +775,20 @@ Send data to the given address from OpenTSDB-compatible agents. Example for writing data with OpenTSDB protocol to local VictoriaMetrics using `nc`: -
```console echo "put foo.bar.baz `date +%s` 123 tag1=value1 tag2=value2" | nc -N localhost 4242 ``` -
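Each line follows the OpenTSDB telnet put format, `put <metric> <unix_seconds> <value> <tag=value> ...`. A sketch of assembling and shape-checking such a line without sending it (metric and tags are illustrative):

```console
# Assemble an OpenTSDB put line and verify its shape:
line="put foo.bar.baz $(date +%s) 123 tag1=value1 tag2=value2"
echo "$line" | grep -Ec '^put [a-z.]+ [0-9]+ [0-9]+( [a-z0-9]+=[a-z0-9]+)+$'   # prints 1 when well-formed
# Pipe it to `nc -N localhost 4242` to actually send it.
```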
An arbitrary number of lines delimited by `\n` (aka newline char) can be sent in one go. After that the data may be read via [/api/v1/export](#how-to-export-data-in-json-line-format) endpoint: -
```console curl -G 'http://localhost:8428/api/v1/export' -d 'match=foo.bar.baz' ``` -
The `/api/v1/export` endpoint should return the following response: @@ -835,33 +809,27 @@ Send data to the given address from OpenTSDB-compatible agents. Example for writing a single data point: -
```console curl -H 'Content-Type: application/json' -d '{"metric":"x.y.z","value":45.34,"tags":{"t1":"v1","t2":"v2"}}' http://localhost:4242/api/put ``` -
Example for writing multiple data points in a single request: -
```console curl -H 'Content-Type: application/json' -d '[{"metric":"foo","value":45.34},{"metric":"bar","value":43}]' http://localhost:4242/api/put ``` -
After that the data may be read via [/api/v1/export](#how-to-export-data-in-json-line-format) endpoint: -
```console curl -G 'http://localhost:8428/api/v1/export' -d 'match[]=x.y.z' -d 'match[]=foo' -d 'match[]=bar' ``` -
The `/api/v1/export` endpoint should return the following response: @@ -1297,13 +1265,11 @@ In this case the output may contain multiple lines with samples for the same tim Pass `Accept-Encoding: gzip` HTTP header in the request to `/api/v1/export` in order to reduce network bandwidth during exporting big amounts of time series data. This enables gzip compression for the exported data. Example for exporting gzipped data: -
```console curl -H 'Accept-Encoding: gzip' http://localhost:8428/api/v1/export -d 'match[]={__name__!=""}' > data.jsonl.gz ``` -
The maximum duration for each request to `/api/v1/export` is limited by `-search.maxExportDuration` command-line flag. @@ -1517,23 +1483,19 @@ and in [Pushgateway format](https://github.com/prometheus/pushgateway#url) via ` For example, the following command imports a single line in Prometheus exposition format into VictoriaMetrics: -
```console curl -d 'foo{bar="baz"} 123' -X POST 'http://localhost:8428/api/v1/import/prometheus' ``` -
The following command may be used for verifying the imported data: -
```console curl -G 'http://localhost:8428/api/v1/export' -d 'match={__name__=~"foo"}' ``` -
It should return something like the following: @@ -1543,24 +1505,20 @@ It should return something like the following: The following command imports a single metric via [Pushgateway format](https://github.com/prometheus/pushgateway#url) with `{job="my_app",instance="host123"}` labels: -
```console curl -d 'metric{label="abc"} 123' -X POST 'http://localhost:8428/api/v1/import/prometheus/metrics/job/my_app/instance/host123' ``` -
Pass `Content-Encoding: gzip` HTTP request header to `/api/v1/import/prometheus` for importing gzipped data: -
```console # Import gzipped data to : curl -X POST -H 'Content-Encoding: gzip' http://destination-victoriametrics:8428/api/v1/import/prometheus -T prometheus_data.gz ``` -
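`prometheus_data.gz` is just gzipped Prometheus exposition text, so a file for the command above can be prepared with standard `gzip` (the metric line is illustrative):

```console
# Gzip a file of Prometheus exposition lines for upload via curl -T:
printf 'foo{bar="baz"} 123\n' > prometheus_data.txt
gzip -c prometheus_data.txt > prometheus_data.gz
gunzip -c prometheus_data.gz   # round-trips to the original line
```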
Extra labels may be added to all the imported metrics either via [Pushgateway format](https://github.com/prometheus/pushgateway#url) or by passing `extra_label=name=value` query args. For example, `/api/v1/import/prometheus?extra_label=foo=bar` would add `{foo="bar"}` label to all the imported metrics. @@ -2457,23 +2415,19 @@ VictoriaMetrics provides handlers for collecting the following [Go profiles](htt * Memory profile. It can be collected with the following command (replace `0.0.0.0` with hostname if needed): -
```console curl http://0.0.0.0:8428/debug/pprof/heap > mem.pprof ``` -
* CPU profile. It can be collected with the following command (replace `0.0.0.0` with hostname if needed): -
```console curl http://0.0.0.0:8428/debug/pprof/profile > cpu.pprof ``` -
The command for collecting CPU profile waits for 30 seconds before returning.

diff --git a/docs/url-examples.md b/docs/url-examples.md
index 3b2a6ae26..3447d62a7 100644
--- a/docs/url-examples.md
+++ b/docs/url-examples.md
@@ -17,17 +17,14 @@ menu:
Note that the handler accepts any HTTP method, so sending a `GET` request to `/api/v1/admin/tsdb/delete_series` will also result in deletion of the matching time series.

Single-node VictoriaMetrics:
-
```console curl -v http://localhost:8428/api/v1/admin/tsdb/delete_series -d 'match[]=vm_http_request_errors_total' ``` -
The request should return [HTTP Status 204](https://datatracker.ietf.org/doc/html/rfc7231#page-53), and the output will look like:
-
```console * Trying 127.0.0.1:8428... @@ -45,20 +42,16 @@ The expected output should return [HTTP Status 204](https://datatracker.ietf.org * Connection #0 to host 127.0.0.1 left intact ``` -
Cluster version of VictoriaMetrics: -
```console curl -v http://:8481/delete/0/prometheus/api/v1/admin/tsdb/delete_series -d 'match[]=vm_http_request_errors_total' ``` -
The request should return [HTTP Status 204](https://datatracker.ietf.org/doc/html/rfc7231#page-53), and the output will look like:
-
```console * Trying 127.0.0.1:8481... @@ -76,7 +69,6 @@ The expected output should return [HTTP Status 204](https://datatracker.ietf.org * Connection #0 to host 127.0.0.1 left intact ``` -
Additional information: @@ -88,22 +80,18 @@ Additional information: **Exports raw samples from VictoriaMetrics in JSON line format** Single-node VictoriaMetrics: -
```console curl http://localhost:8428/api/v1/export -d 'match[]=vm_http_request_errors_total' > filename.json ``` -
Cluster version of VictoriaMetrics: -
```console curl http://:8481/select/0/prometheus/api/v1/export -d 'match[]=vm_http_request_errors_total' > filename.json ``` -
Additional information: @@ -117,22 +105,18 @@ Additional information: **Exports raw samples from VictoriaMetrics in CSV format** Single-node VictoriaMetrics: -
```console curl http://localhost:8428/api/v1/export/csv -d 'format=__name__,__value__,__timestamp__:unix_s' -d 'match[]=vm_http_request_errors_total' > filename.csv ``` -
Cluster version of VictoriaMetrics: -
```console curl http://:8481/select/0/prometheus/api/v1/export/csv -d 'format=__name__,__value__,__timestamp__:unix_s' -d 'match[]=vm_http_request_errors_total' > filename.csv ``` -
Additional information: @@ -145,22 +129,18 @@ Additional information: **Exports raw samples from VictoriaMetrics in native format** Single-node VictoriaMetrics: -
```console curl http://localhost:8428/api/v1/export/native -d 'match[]=vm_http_request_errors_total' > filename.bin ``` -
Cluster version of VictoriaMetrics: -
```console curl http://:8481/select/0/prometheus/api/v1/export/native -d 'match[]=vm_http_request_errors_total' > filename.bin ``` -
More information: @@ -173,22 +153,18 @@ More information: **Imports data to VictoriaMetrics in JSON line format** Single-node VictoriaMetrics: -
```console curl -H 'Content-Type: application/json' --data-binary "@filename.json" -X POST http://localhost:8428/api/v1/import ``` -
Cluster version of VictoriaMetrics: -
```console curl -H 'Content-Type: application/json' --data-binary "@filename.json" -X POST http://:8480/insert/0/prometheus/api/v1/import ``` -
More information: @@ -201,22 +177,18 @@ More information: **Imports CSV data to VictoriaMetrics** Single-node VictoriaMetrics: -
```console curl -d "GOOG,1.23,4.56,NYSE" 'http://localhost:8428/api/v1/import/csv?format=2:metric:ask,3:metric:bid,1:label:ticker,4:label:market' ``` -
Cluster version of VictoriaMetrics: -
```console curl -d "GOOG,1.23,4.56,NYSE" 'http://:8480/insert/0/prometheus/api/v1/import/csv?format=2:metric:ask,3:metric:bid,1:label:ticker,4:label:market' ``` -
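Each entry in the `format` query arg is `<column_number>:<type>:<name>`, so for the row above column 2 becomes the value of metric `ask`, column 3 the value of metric `bid`, and columns 1 and 4 become the `ticker` and `market` labels. A local sketch of that column split:

```console
row='GOOG,1.23,4.56,NYSE'
# format=2:metric:ask,3:metric:bid,1:label:ticker,4:label:market
echo "$row" | cut -d, -f2   # prints 1.23 -> value of ask{ticker="GOOG",market="NYSE"}
echo "$row" | cut -d, -f1   # prints GOOG -> value of the ticker label
```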
Additional information: @@ -229,20 +201,16 @@ Additional information: **Imports data to VictoriaMetrics in native format** Single-node VictoriaMetrics: -
```console curl -X POST http://localhost:8428/api/v1/import/native -T filename.bin ``` -
Cluster version of VictoriaMetrics: -
```console curl -X POST http://:8480/insert/0/prometheus/api/v1/import/native -T filename.bin ``` -
Additional information: @@ -255,21 +223,17 @@ Additional information: **Imports data to VictoriaMetrics in Prometheus text exposition format** Single-node VictoriaMetrics: -
```console curl -d 'metric_name{foo="bar"} 123' -X POST http://localhost:8428/api/v1/import/prometheus ``` -
Cluster version of VictoriaMetrics: -
```console curl -d 'metric_name{foo="bar"} 123' -X POST http://<vminsert>:8480/insert/0/prometheus/api/v1/import/prometheus ``` -
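Several samples can be imported per request, one Prometheus exposition-format line each, with an optional trailing timestamp in milliseconds. A sketch building such a payload locally (hypothetical metric names; the commented curl shows where it would be sent):

```shell
# Two exposition-format lines with an explicit millisecond timestamp:
now_ms=$(($(date +%s) * 1000))
payload="metric_one{foo=\"bar\"} 123 ${now_ms}
metric_two{foo=\"baz\"} 456 ${now_ms}"
echo "$payload"
# curl -d "$payload" -X POST http://localhost:8428/api/v1/import/prometheus
```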
Additional information: @@ -282,22 +246,18 @@ Additional information: **Get a list of label names at the given time range** Single-node VictoriaMetrics: -
```console curl http://localhost:8428/prometheus/api/v1/labels ``` -
Cluster version of VictoriaMetrics: -
```console curl http://<vmselect>:8481/select/0/prometheus/api/v1/labels ``` -
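The `start` and `end` args take unix timestamps as well as RFC3339 strings. A sketch (assumes GNU `date`) computing day-aligned UTC bounds for yesterday; the commented curl shows how they would be passed:

```shell
# Day-aligned UTC boundaries for "yesterday" (GNU date syntax):
start=$(date -u -d 'yesterday 00:00' +%s)
end=$(date -u -d 'today 00:00' +%s)
echo "start=$start end=$end"
# curl http://localhost:8428/prometheus/api/v1/labels -d "start=$start" -d "end=$end"
```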
By default, VictoriaMetrics returns labels seen during the last day starting at 00:00 UTC. An arbitrary time range can be set via [`start` and `end` query args](https://docs.victoriametrics.com/#timestamp-formats). The specified `start..end` time range is rounded to day granularity because of performance optimization concerns. @@ -312,22 +272,18 @@ Additional information: **Get a list of values for a particular label on the given time range** Single-node VictoriaMetrics: -
```console curl http://localhost:8428/prometheus/api/v1/label/job/values ``` -
Cluster version of VictoriaMetrics: -
```console curl http://<vmselect>:8481/select/0/prometheus/api/v1/label/job/values ``` -
By default, VictoriaMetrics returns label values seen during the last day starting at 00:00 UTC. An arbitrary time range can be set via `start` and `end` query args. The specified `start..end` time range is rounded to day granularity because of performance optimization concerns. @@ -342,22 +298,18 @@ Additional information: **Performs PromQL/MetricsQL instant query** Single-node VictoriaMetrics: -
```console curl http://localhost:8428/prometheus/api/v1/query -d 'query=vm_http_request_errors_total' ``` -
Cluster version of VictoriaMetrics: -
```console curl http://<vmselect>:8481/select/0/prometheus/api/v1/query -d 'query=vm_http_request_errors_total' ``` -
Additional information: * [Prometheus querying API usage](https://docs.victoriametrics.com/#prometheus-querying-api-usage) @@ -370,22 +322,18 @@ Additional information: **Performs PromQL/MetricsQL range query** Single-node VictoriaMetrics: -
```console curl http://localhost:8428/prometheus/api/v1/query_range -d 'query=sum(increase(vm_http_request_errors_total{job="foo"}[5m]))' -d 'start=-1d' -d 'step=1h' ``` -
Cluster version of VictoriaMetrics: -
```console curl http://<vmselect>:8481/select/0/prometheus/api/v1/query_range -d 'query=sum(increase(vm_http_request_errors_total{job="foo"}[5m]))' -d 'start=-1d' -d 'step=1h' ``` -
Additional information: * [Prometheus querying API usage](https://docs.victoriametrics.com/#prometheus-querying-api-usage) @@ -398,22 +346,18 @@ Additional information: **Returns series names with their labels on the given time range** Single-node VictoriaMetrics: -
```console curl http://localhost:8428/prometheus/api/v1/series -d 'match[]=vm_http_request_errors_total' ``` -
Cluster version of VictoriaMetrics: -
```console curl http://<vmselect>:8481/select/0/prometheus/api/v1/series -d 'match[]=vm_http_request_errors_total' ``` -
By default, VictoriaMetrics returns time series seen during the last day starting at 00:00 UTC. An arbitrary time range can be set via `start` and `end` query args. The specified `start..end` time range is rounded to day granularity because of performance optimization concerns. @@ -429,22 +373,18 @@ VictoriaMetrics accepts `limit` query arg for `/api/v1/series` handlers for limiting the number of returned time series. **Cardinality statistics** Single-node VictoriaMetrics: -
```console curl http://localhost:8428/prometheus/api/v1/status/tsdb ``` -
Cluster version of VictoriaMetrics: -
```console curl http://<vmselect>:8481/select/0/prometheus/api/v1/status/tsdb ``` -
Additional information: * [Prometheus querying API usage](https://docs.victoriametrics.com/#prometheus-querying-api-usage) @@ -455,30 +395,25 @@ Additional information: **DataDog URL for Single-node VictoriaMetrics** -
``` http://victoriametrics:8428/datadog ``` -
**DataDog URL for Cluster version of VictoriaMetrics** -
``` http://vminsert:8480/insert/0/datadog ``` -
### /datadog/api/v1/series **Imports data in DataDog v1 format into VictoriaMetrics** Single-node VictoriaMetrics: -
```console echo ' @@ -502,10 +437,8 @@ echo ' ' | curl -X POST -H 'Content-Type: application/json' --data-binary @- http://localhost:8428/datadog/api/v1/series ``` -
Cluster version of VictoriaMetrics: -
```console echo ' @@ -529,7 +462,6 @@ ' | curl -X POST -H 'Content-Type: application/json' --data-binary @- 'http://<vminsert>:8480/insert/0/datadog/api/v1/series' ``` -
Additional information: @@ -542,7 +474,6 @@ Additional information: **Imports data in [DataDog v2](https://docs.datadoghq.com/api/latest/metrics/#submit-metrics) format into VictoriaMetrics** Single-node VictoriaMetrics: -
```console echo ' @@ -570,10 +501,8 @@ echo ' ' | curl -X POST -H 'Content-Type: application/json' --data-binary @- http://localhost:8428/datadog/api/v2/series ``` -
Cluster version of VictoriaMetrics: -
```console echo ' @@ -601,7 +530,6 @@ ' | curl -X POST -H 'Content-Type: application/json' --data-binary @- 'http://<vminsert>:8480/insert/0/datadog/api/v2/series' ``` -
Additional information: @@ -613,22 +541,18 @@ Additional information: **Returns federated metrics** Single-node VictoriaMetrics: -
```console curl http://localhost:8428/federate -d 'match[]=vm_http_request_errors_total' ``` -
Cluster version of VictoriaMetrics: -
```console curl http://<vmselect>:8481/select/0/prometheus/federate -d 'match[]=vm_http_request_errors_total' ``` -
Additional information: @@ -641,22 +565,18 @@ Additional information: **Searches Graphite metrics in VictoriaMetrics** Single-node VictoriaMetrics: -
```console curl http://localhost:8428/graphite/metrics/find -d 'query=vm_http_request_errors_total' ``` -
Cluster version of VictoriaMetrics: -
```console curl http://<vmselect>:8481/select/0/graphite/metrics/find -d 'query=vm_http_request_errors_total' ``` -
Additional information: @@ -670,22 +590,18 @@ Additional information: **Writes data with InfluxDB line protocol to VictoriaMetrics** Single-node VictoriaMetrics: -
```console curl -d 'measurement,tag1=value1,tag2=value2 field1=123,field2=1.23' -X POST http://localhost:8428/write ``` -
Cluster version of VictoriaMetrics: -
```console curl -d 'measurement,tag1=value1,tag2=value2 field1=123,field2=1.23' -X POST http://<vminsert>:8480/insert/0/influx/write ``` -
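VictoriaMetrics stores each InfluxDB field as its own series named `<measurement>_<field>` (the separator defaults to `_` and is controlled by the `-influxMeasurementFieldSeparator` command-line flag), with tags becoming labels. A local sketch of what the sample line above turns into:

```shell
# 'measurement,tag1=value1,tag2=value2 field1=123,field2=1.23' is stored as
# one series per field, with the measurement joined to the field name:
echo 'measurement_field1{tag1="value1",tag2="value2"} 123'
echo 'measurement_field2{tag1="value1",tag2="value2"} 1.23'
```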
Additional information: @@ -697,17 +613,14 @@ **Resets the response cache for previously served queries. It is recommended to invoke it after the [backfilling](https://docs.victoriametrics.com/#backfilling) procedure.** Single-node VictoriaMetrics: -
```console curl -Is http://localhost:8428/internal/resetRollupResultCache ``` -
Cluster version of VictoriaMetrics: -
```console curl -Is http://<vmselect>:8481/select/internal/resetRollupResultCache @@ -717,7 +630,6 @@ vmselect will propagate this call to the rest of the vmselects listed in its `-selectNode` flag. If the flag isn't set, then the cache needs to be purged from each vmselect individually. -
### TCP and UDP @@ -727,42 +639,34 @@ Turned off by default. Enable OpenTSDB receiver in VictoriaMetrics by setting `-opentsdbListenAddr` command-line flag. *If run from Docker, the '-opentsdbListenAddr' port should be exposed.* Single-node VictoriaMetrics: -
```console echo "put foo.bar.baz `date +%s` 123 tag1=value1 tag2=value2" | nc -N localhost 4242 ``` -
Cluster version of VictoriaMetrics: -
```console echo "put foo.bar.baz `date +%s` 123 tag1=value1 tag2=value2" | nc -N <vminsert> 4242 ``` -
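The telnet `put` line piped to `nc` above follows a fixed field order. A sketch building one locally before sending:

```shell
# put <metric> <unix-timestamp> <value> <tag1=v1> [<tag2=v2> ...]
line="put foo.bar.baz $(date +%s) 123 tag1=value1 tag2=value2"
echo "$line"
# echo "$line" | nc -N localhost 4242
```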
Enable HTTP server for OpenTSDB /api/put requests by setting `-opentsdbHTTPListenAddr` command-line flag. Single-node VictoriaMetrics: -
```console curl -H 'Content-Type: application/json' -d '[{"metric":"foo","value":45.34},{"metric":"bar","value":43}]' http://localhost:4242/api/put ``` -
Cluster version of VictoriaMetrics: -
```console curl -H 'Content-Type: application/json' -d '[{"metric":"foo","value":45.34},{"metric":"bar","value":43}]' http://<vminsert>:8480/insert/42/opentsdb/api/put ``` -
Additional information: @@ -774,22 +678,18 @@ Additional information: Enable Graphite receiver in VictoriaMetrics by setting `-graphiteListenAddr` command-line flag. Single-node VictoriaMetrics: -
```console echo "foo.bar.baz;tag1=value1;tag2=value2 123 `date +%s`" | nc -N localhost 2003 ``` -
Cluster version of VictoriaMetrics: -
```console echo "foo.bar.baz;tag1=value1;tag2=value2 123 `date +%s`" | nc -N <vminsert> 2003 ``` -
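The Graphite plaintext line sent above attaches tags to the metric name with `;` separators, followed by the value and a unix timestamp. A sketch building one locally:

```shell
# <name>;<tag1=v1>;<tag2=v2> <value> <unix-timestamp>
line="foo.bar.baz;tag1=value1;tag2=value2 123 $(date +%s)"
echo "$line"
# echo "$line" | nc -N localhost 2003
```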
Additional information: diff --git a/docs/vmagent.md b/docs/vmagent.md index d8b203b82..5f7325fd6 100644 --- a/docs/vmagent.md +++ b/docs/vmagent.md @@ -1482,23 +1482,19 @@ ARM build may run on Raspberry Pi or on [energy-efficient ARM servers](https://b * Memory profile can be collected with the following command (replace `0.0.0.0` with hostname if needed): -
```bash curl http://0.0.0.0:8429/debug/pprof/heap > mem.pprof ``` -
* CPU profile can be collected with the following command (replace `0.0.0.0` with hostname if needed): -
```bash curl http://0.0.0.0:8429/debug/pprof/profile > cpu.pprof ``` -
The command for collecting CPU profile waits for 30 seconds before returning. diff --git a/docs/vmalert.md b/docs/vmalert.md index 8ad72d2c4..4b9e39314 100644 --- a/docs/vmalert.md +++ b/docs/vmalert.md @@ -914,23 +914,19 @@ To disable stripping of such info pass `-datasource.showURL` cmd-line flag to vm * Memory profile. It can be collected with the following command (replace `0.0.0.0` with hostname if needed): -
```console curl http://0.0.0.0:8880/debug/pprof/heap > mem.pprof ``` -
* CPU profile. It can be collected with the following command (replace `0.0.0.0` with hostname if needed): -
```console curl http://0.0.0.0:8880/debug/pprof/profile > cpu.pprof ``` -
The command for collecting CPU profile waits for 30 seconds before returning. diff --git a/docs/vmauth.md b/docs/vmauth.md index 0f06dc572..6a5e96d09 100644 --- a/docs/vmauth.md +++ b/docs/vmauth.md @@ -776,23 +776,19 @@ ROOT_IMAGE=scratch make package-vmauth * Memory profile. It can be collected with the following command (replace `0.0.0.0` with hostname if needed): -
```console curl http://0.0.0.0:8427/debug/pprof/heap > mem.pprof ``` -
* CPU profile. It can be collected with the following command (replace `0.0.0.0` with hostname if needed): -
```console curl http://0.0.0.0:8427/debug/pprof/profile > cpu.pprof ``` -
The command for collecting CPU profile waits for 30 seconds before returning.