Mirror of https://github.com/VictoriaMetrics/VictoriaMetrics.git, synced 2025-01-20 15:29:24 +01:00

Commit 79f9ff2e14 (parent fcec91d776): update wiki pages
@ -30,7 +30,7 @@ The following tip changes can be tested by building VictoriaMetrics components f
* FEATURE: [vmctl](https://docs.victoriametrics.com/vmctl.html): add verbose output for docker installations or when TTY isn't available. See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/4081).
* FEATURE: [vmctl](https://docs.victoriametrics.com/vmctl.html): interrupt backoff retries when the import process is cancelled. The change makes vmctl more responsive in case of errors during the import. See [this pull request](https://github.com/VictoriaMetrics/VictoriaMetrics/pull/4442).
* FEATURE: [vmctl](https://docs.victoriametrics.com/vmctl.html): update the backoff policy on retries to reduce the probability of overloading the `source` or `destination` databases. See [this pull request](https://github.com/VictoriaMetrics/VictoriaMetrics/pull/4442).
* FEATURE: vmstorage: suppress "broken pipe" and "connection reset by peer" errors for search queries on the vmstorage side. See [this](https://github.com/VictoriaMetrics/VictoriaMetrics/pull/4418/commits/a6a7795b9e1f210d614a2c5f9a3016b97ded4792) and [this](https://github.com/VictoriaMetrics/VictoriaMetrics/pull/4498/commits/830dac177f0f09032165c248943a5da0e10dfe90) commits.
* FEATURE: [Official Grafana dashboards for VictoriaMetrics](https://grafana.com/orgs/victoriametrics): add a panel for tracking the rate of syscalls while writing or reading from disk via `process_io_(read|write)_syscalls_total` metrics.
* FEATURE: accept timestamps in milliseconds at `start`, `end` and `time` query args in [Prometheus querying API](https://docs.victoriametrics.com/#prometheus-querying-api-usage). See [these docs](https://docs.victoriametrics.com/#timestamp-formats) and [this feature request](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/4459).
@ -8,9 +8,37 @@ before you start working with VictoriaLogs.
The following options exist:

- [To run pre-built binaries](#pre-built-binaries)
- [To run Docker image](#docker-image)
- [To run in Kubernetes with Helm charts](#helm-charts)
- [To build VictoriaLogs from source code](#building-from-source-code)

### Pre-built binaries

Pre-built binaries for VictoriaLogs are available at the [releases](https://github.com/VictoriaMetrics/VictoriaMetrics/releases/) page.
Just download the archive for the needed operating system and architecture, unpack it and run `victoria-logs-prod` from it.

For example, the following commands download the VictoriaLogs archive for Linux/amd64, unpack and run it:

```bash
curl -L -O https://github.com/VictoriaMetrics/VictoriaMetrics/releases/download/v0.1.0-victorialogs/victoria-logs-linux-amd64-v0.1.0-victorialogs.tar.gz
tar xzf victoria-logs-linux-amd64-v0.1.0-victorialogs.tar.gz
./victoria-logs-prod
```

VictoriaLogs is now ready for [data ingestion](https://docs.victoriametrics.com/VictoriaLogs/data-ingestion/)
and [querying](https://docs.victoriametrics.com/VictoriaLogs/querying/) at TCP port `9428`!
It has no external dependencies, so it may run in various environments without additional setup and configuration.
VictoriaLogs automatically adapts to the available CPU and RAM resources. It also automatically sets up and creates
the needed indexes during [data ingestion](https://docs.victoriametrics.com/VictoriaLogs/data-ingestion/).
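
As a quick smoke test, you can push a log line and query it back over HTTP. This is a minimal sketch using the JSON stream ingestion endpoint and the LogsQL query endpoint described later in these docs, assuming the default `localhost:9428` listen address:

```bash
# Push a single log line via the JSON stream API (ndjson over HTTP).
# The timestamp must be recent enough to fit the configured retention.
echo '{ "log": { "message": "hello from the quick start" }, "date": "'"$(date -u +%Y-%m-%dT%H:%M:%SZ)"'", "stream": "quickstart" }' | \
  curl -X POST -H 'Content-Type: application/stream+json' --data-binary @- \
  'http://localhost:9428/insert/jsonline?_stream_fields=stream&_time_field=date&_msg_field=log.message'

# Query the ingested line back via a LogsQL word filter.
curl http://localhost:9428/select/logsql/query -d 'query=hello'
```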
See also:

- [How to configure VictoriaLogs](#how-to-configure-victorialogs)
- [How to ingest logs into VictoriaLogs](https://docs.victoriametrics.com/VictoriaLogs/data-ingestion/)
- [How to query VictoriaLogs](https://docs.victoriametrics.com/VictoriaLogs/querying/)

### Docker image

You can run VictoriaLogs in a Docker container. It is the easiest way to start using VictoriaLogs.
@ -18,9 +46,20 @@ Here is the command to run VictoriaLogs in a Docker container:
```bash
docker run --rm -it -p 9428:9428 -v ./victoria-logs-data:/victoria-logs-data \
  docker.io/victoriametrics/victoria-logs:v0.1.0-victorialogs
```

See also:

- [How to configure VictoriaLogs](#how-to-configure-victorialogs)
- [How to ingest logs into VictoriaLogs](https://docs.victoriametrics.com/VictoriaLogs/data-ingestion/)
- [How to query VictoriaLogs](https://docs.victoriametrics.com/VictoriaLogs/querying/)

### Helm charts

You can run VictoriaLogs in a Kubernetes environment
with [these Helm charts](https://github.com/VictoriaMetrics/helm-charts/blob/master/charts/victoria-logs-single/README.md).
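
A minimal installation sketch, assuming the standard VictoriaMetrics Helm repository URL and the `victoria-logs-single` chart name from the link above; verify both in the chart README:

```bash
# Add the VictoriaMetrics Helm repository (URL is an assumption - check the chart README).
helm repo add vm https://victoriametrics.github.io/helm-charts/
helm repo update

# Install the single-node VictoriaLogs chart into the "logs" namespace.
helm install victoria-logs vm/victoria-logs-single --namespace logs --create-namespace
```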
### Building from source code

Follow these steps to build VictoriaLogs from source code:
@ -50,11 +89,23 @@ It has no any external dependencies, so it may run in various environments witho
VictoriaLogs automatically adapts to the available CPU and RAM resources. It also automatically sets up and creates
the needed indexes during [data ingestion](https://docs.victoriametrics.com/VictoriaLogs/data-ingestion/).

See also:

- [How to configure VictoriaLogs](#how-to-configure-victorialogs)
- [How to ingest logs into VictoriaLogs](https://docs.victoriametrics.com/VictoriaLogs/data-ingestion/)
- [How to query VictoriaLogs](https://docs.victoriametrics.com/VictoriaLogs/querying/)

## How to configure VictoriaLogs

VictoriaLogs is configured via command-line flags. All the command-line flags have sane defaults,
so there is no need to tune them in the general case. VictoriaLogs runs smoothly in most environments
without additional configuration.

Pass `-help` to VictoriaLogs in order to see the list of supported command-line flags with their description and default values:

```bash
/path/to/victoria-logs -help
```

VictoriaLogs stores the ingested data to the `victoria-logs-data` directory by default. The directory can be changed
@ -66,3 +117,22 @@ E.g. it uses the retention of 7 days. Read [these docs](https://docs.victoriamet
for the [ingested](https://docs.victoriametrics.com/VictoriaLogs/data-ingestion/) logs.
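
For illustration, a sketch of overriding the data directory and the retention; the flag names `-storageDataPath` and `-retentionPeriod` are assumptions here, so confirm them in the `-help` output:

```bash
# Store data under /var/lib/victoria-logs and keep logs for 8 weeks.
# Flag names are assumptions - confirm them via `/path/to/victoria-logs -help`.
/path/to/victoria-logs -storageDataPath=/var/lib/victoria-logs -retentionPeriod=8w
```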
It is recommended to set up monitoring of VictoriaLogs according to [these docs](https://docs.victoriametrics.com/VictoriaLogs/#monitoring).
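
For a quick check that metrics are exposed, you can fetch the metrics endpoint directly; the `/metrics` path is the usual VictoriaMetrics convention and is an assumption here:

```bash
# Fetch VictoriaLogs metrics in Prometheus text format and show the vl_* series.
curl -s http://localhost:9428/metrics | grep '^vl_'
```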
See also:

- [How to ingest logs into VictoriaLogs](https://docs.victoriametrics.com/VictoriaLogs/data-ingestion/)
- [How to query VictoriaLogs](https://docs.victoriametrics.com/VictoriaLogs/querying/)

## Docker demos

Here are Docker-compose demos, which start VictoriaLogs and push logs to it via various log collectors:

- [Filebeat demo](https://github.com/VictoriaMetrics/VictoriaMetrics/tree/master/deployment/docker/victorialogs/filebeat-docker)
- [Fluentbit demo](https://github.com/VictoriaMetrics/VictoriaMetrics/tree/master/deployment/docker/victorialogs/fluentbit-docker)
- [Logstash demo](https://github.com/VictoriaMetrics/VictoriaMetrics/tree/master/deployment/docker/victorialogs/logstash)
- [Vector demo](https://github.com/VictoriaMetrics/VictoriaMetrics/tree/master/deployment/docker/victorialogs/vector-docker)

You can use [this Helm chart](https://github.com/VictoriaMetrics/helm-charts/blob/master/charts/victoria-logs-single/README.md)
as a demo for running Fluentbit in Kubernetes with VictoriaLogs.
@ -12,11 +12,13 @@ It provides the following key features:
  see [LogsQL docs](https://docs.victoriametrics.com/VictoriaLogs/LogsQL.html).
- VictoriaLogs can be seamlessly combined with good old Unix tools for log analysis such as `grep`, `less`, `sort`, `jq`, etc.
  See [these docs](https://docs.victoriametrics.com/VictoriaLogs/querying/#command-line) for details.
- VictoriaLogs capacity and performance scale linearly with the available resources (CPU, RAM, disk IO, disk space).
  It runs smoothly on both a Raspberry Pi and a server with hundreds of CPU cores and terabytes of RAM.
- VictoriaLogs can handle much bigger data volumes than ElasticSearch and Grafana Loki when running on comparable hardware.
  See [these docs](#benchmarks).
- VictoriaLogs supports multitenancy - see [these docs](#multitenancy).
- VictoriaLogs supports out-of-order logs' ingestion aka backfilling.
- VictoriaLogs provides a simple web UI for querying logs - see [these docs](https://docs.victoriametrics.com/VictoriaLogs/querying/#web-ui).

VictoriaLogs is at the Preview stage now. It is ready for evaluation in production and for verifying the claims given above.
It isn't recommended to migrate from existing logging solutions to the VictoriaLogs Preview in the general case yet.
@ -35,6 +37,21 @@ vmagent (see [these docs](https://docs.victoriametrics.com/vmagent.html#how-to-c
VictoriaLogs emits its own logs to stdout. It is recommended to investigate these logs during troubleshooting.

## Upgrading

It is safe to upgrade VictoriaLogs to new versions unless the [release notes](https://github.com/VictoriaMetrics/VictoriaMetrics/releases) say otherwise.
It is safe to skip multiple versions during the upgrade unless the [release notes](https://github.com/VictoriaMetrics/VictoriaMetrics/releases) say otherwise.
It is recommended to perform regular upgrades to the latest version, since it may contain important bug fixes, performance optimizations or new features.

It is also safe to downgrade to older versions unless the [release notes](https://github.com/VictoriaMetrics/VictoriaMetrics/releases) say otherwise.

The following steps must be performed during the upgrade / downgrade procedure:

* Send the `SIGINT` signal to the VictoriaLogs process in order to gracefully stop it (see the example below).
  See [how to send signals to processes](https://stackoverflow.com/questions/33239959/send-signal-to-process-from-command-line).
* Wait until the process stops. This can take a few seconds.
* Start the upgraded VictoriaLogs.
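
For example, a minimal sketch of the graceful stop, assuming the process was started as `victoria-logs-prod`:

```bash
# Send SIGINT to the running VictoriaLogs process so it can shut down gracefully.
kill -INT "$(pgrep -f victoria-logs-prod)"
```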
## Retention

By default VictoriaLogs stores log entries with timestamps in the time range `[now-7d, now]`, while dropping logs outside the given time range.
@ -94,3 +111,14 @@ If `AccountID` and/or `ProjectID` request headers aren't set, then the default `
VictoriaLogs has very low overhead for per-tenant management, so it is OK to have thousands of tenants in a single VictoriaLogs instance.

VictoriaLogs doesn't perform per-tenant authorization. Use [vmauth](https://docs.victoriametrics.com/vmauth.html) or similar tools for per-tenant authorization.
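
For illustration, a sketch of addressing a specific tenant via the `AccountID` and `ProjectID` request headers at both ingestion and querying time (the tenant values are arbitrary):

```bash
# Ingest a log line into tenant (AccountID=12, ProjectID=34).
echo '{ "log": { "message": "tenant test" }, "date": "'"$(date -u +%Y-%m-%dT%H:%M:%SZ)"'", "stream": "s1" }' | \
  curl -X POST -H 'AccountID: 12' -H 'ProjectID: 34' -H 'Content-Type: application/stream+json' --data-binary @- \
  'http://localhost:9428/insert/jsonline?_stream_fields=stream&_time_field=date&_msg_field=log.message'

# Query the same tenant.
curl http://localhost:9428/select/logsql/query -H 'AccountID: 12' -H 'ProjectID: 34' -d 'query=tenant'
```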
## Benchmarks

Here is a [benchmark suite](https://github.com/VictoriaMetrics/VictoriaMetrics/tree/master/deployment/logs-benchmark) for comparing data ingestion performance
and resource usage between VictoriaLogs and Elasticsearch.

It is recommended to [set up VictoriaLogs](https://docs.victoriametrics.com/VictoriaLogs/QuickStart.html) in production alongside the existing
log management systems and compare resource usage and query performance between VictoriaLogs and your system such as ElasticSearch or Grafana Loki.

Please share benchmark results and ideas on how to improve benchmarks / VictoriaLogs
via [VictoriaMetrics community channels](https://docs.victoriametrics.com/#community-and-contributions).
@ -17,8 +17,6 @@ The following functionality is planned in the future versions of VictoriaLogs:
- Support for [data ingestion](https://docs.victoriametrics.com/VictoriaLogs/data-ingestion/) from popular log collectors and formats:
  - Promtail (aka Grafana Loki)
  - Vector.dev
  - Fluentbit
  - Fluentd
  - Syslog
- Add missing functionality to [LogsQL](https://docs.victoriametrics.com/VictoriaLogs/LogsQL.html):
@ -72,7 +72,7 @@ output.elasticsearch:
    compression_level: 1
```

By default, the ingested logs are stored in the `(AccountID=0, ProjectID=0)` [tenant](https://docs.victoriametrics.com/VictoriaLogs/#multitenancy).
If you need to store logs in another tenant, then specify the needed tenant via `headers` at the `output.elasticsearch` section.
For example, the following `filebeat.yml` config instructs Filebeat to store the data to the `(AccountID=12, ProjectID=34)` tenant:
@ -88,6 +88,9 @@ output.elasticsearch:
    _stream_fields: "host.name,log.file.path"
```

See also:

- [Data ingestion troubleshooting](https://docs.victoriametrics.com/VictoriaLogs/data-ingestion/#troubleshooting).
- [How to query VictoriaLogs](https://docs.victoriametrics.com/VictoriaLogs/querying/).
- [Filebeat `output.elasticsearch` docs](https://www.elastic.co/guide/en/beats/filebeat/current/elasticsearch-output.html).
- [Docker-compose demo for Filebeat integration with VictoriaLogs](https://github.com/VictoriaMetrics/VictoriaMetrics/tree/master/deployment/docker/victorialogs/filebeat-docker).
VictoriaLogs/data-ingestion/Fluentbit.md (new file)
@ -0,0 +1,89 @@
## Fluentbit setup

Specify the [http output](https://docs.fluentbit.io/manual/pipeline/outputs/http) section in the `fluentbit.conf`
for sending the collected logs to [VictoriaLogs](https://docs.victoriametrics.com/VictoriaLogs/):

```conf
[Output]
    Name http
    Match *
    host localhost
    port 9428
    uri /insert/jsonline?_stream_fields=stream&_msg_field=log&_time_field=date
    format json_lines
    json_date_format iso8601
```

Substitute the host (`localhost`) and port (`9428`) with the real TCP address of VictoriaLogs.

See [these docs](https://docs.victoriametrics.com/VictoriaLogs/data-ingestion/#http-parameters) for details on the query args specified in the `uri`.

It is recommended to verify whether the initial setup generates the needed [log fields](https://docs.victoriametrics.com/VictoriaLogs/keyConcepts.html#data-model)
and uses the correct [stream fields](https://docs.victoriametrics.com/VictoriaLogs/keyConcepts.html#stream-fields).
This can be done by specifying the `debug` [parameter](https://docs.victoriametrics.com/VictoriaLogs/data-ingestion/#http-parameters) in the `uri`
and then inspecting VictoriaLogs logs:

```conf
[Output]
    Name http
    Match *
    host localhost
    port 9428
    uri /insert/jsonline?_stream_fields=stream&_msg_field=log&_time_field=date&debug=1
    format json_lines
    json_date_format iso8601
```

If some [log fields](https://docs.victoriametrics.com/VictoriaLogs/keyConcepts.html#data-model) must be skipped
during data ingestion, then they can be put into the `ignore_fields` [parameter](https://docs.victoriametrics.com/VictoriaLogs/data-ingestion/#http-parameters).
For example, the following config instructs VictoriaLogs to ignore `log.offset` and `event.original` fields in the ingested logs:

```conf
[Output]
    Name http
    Match *
    host localhost
    port 9428
    uri /insert/jsonline?_stream_fields=stream&_msg_field=log&_time_field=date&ignore_fields=log.offset,event.original
    format json_lines
    json_date_format iso8601
```

If Fluentbit sends logs to VictoriaLogs in another datacenter, then it may be useful to enable data compression via the `compress gzip` option.
This usually allows saving network bandwidth and costs by up to 5 times:

```conf
[Output]
    Name http
    Match *
    host localhost
    port 9428
    uri /insert/jsonline?_stream_fields=stream&_msg_field=log&_time_field=date
    format json_lines
    json_date_format iso8601
    compress gzip
```

By default, the ingested logs are stored in the `(AccountID=0, ProjectID=0)` [tenant](https://docs.victoriametrics.com/VictoriaLogs/keyConcepts.html#multitenancy).
If you need to store logs in another tenant, then specify the needed tenant via `header` options.
For example, the following `fluentbit.conf` config instructs Fluentbit to store the data to the `(AccountID=12, ProjectID=34)` tenant:

```conf
[Output]
    Name http
    Match *
    host localhost
    port 9428
    uri /insert/jsonline?_stream_fields=stream&_msg_field=log&_time_field=date
    format json_lines
    json_date_format iso8601
    header AccountID 12
    header ProjectID 34
```

See also:

- [Data ingestion troubleshooting](https://docs.victoriametrics.com/VictoriaLogs/data-ingestion/#troubleshooting).
- [How to query VictoriaLogs](https://docs.victoriametrics.com/VictoriaLogs/querying/).
- [Fluentbit HTTP output config docs](https://docs.fluentbit.io/manual/pipeline/outputs/http).
- [Docker-compose demo for Fluentbit integration with VictoriaLogs](https://github.com/VictoriaMetrics/VictoriaMetrics/tree/master/deployment/docker/victorialogs/fluentbit-docker).
@ -74,7 +74,7 @@ output {
}
```

By default, the ingested logs are stored in the `(AccountID=0, ProjectID=0)` [tenant](https://docs.victoriametrics.com/VictoriaLogs/#multitenancy).
If you need to store logs in another tenant, then specify the needed tenant via `custom_headers` at the `output.elasticsearch` section.
For example, the following `logstash.conf` config instructs Logstash to store the data to the `(AccountID=12, ProjectID=34)` tenant:
@ -95,6 +95,9 @@ output {
}
```

See also:

- [Data ingestion troubleshooting](https://docs.victoriametrics.com/VictoriaLogs/data-ingestion/#troubleshooting).
- [How to query VictoriaLogs](https://docs.victoriametrics.com/VictoriaLogs/querying/).
- [Logstash `output.elasticsearch` docs](https://www.elastic.co/guide/en/logstash/current/plugins-outputs-elasticsearch.html).
- [Docker-compose demo for Logstash integration with VictoriaLogs](https://github.com/VictoriaMetrics/VictoriaMetrics/tree/master/deployment/docker/victorialogs/logstash).
@ -3,11 +3,17 @@
[VictoriaLogs](https://docs.victoriametrics.com/VictoriaLogs/) can accept logs from the following log collectors:

- Filebeat. See [how to setup Filebeat for sending logs to VictoriaLogs](https://docs.victoriametrics.com/VictoriaLogs/data-ingestion/Filebeat.html).
- Fluentbit. See [how to setup Fluentbit for sending logs to VictoriaLogs](https://docs.victoriametrics.com/VictoriaLogs/data-ingestion/Fluentbit.html).
- Logstash. See [how to setup Logstash for sending logs to VictoriaLogs](https://docs.victoriametrics.com/VictoriaLogs/data-ingestion/Logstash.html).
- Vector. See [how to setup Vector for sending logs to VictoriaLogs](https://docs.victoriametrics.com/VictoriaLogs/data-ingestion/Vector.html).

The ingested logs can be queried according to [these docs](https://docs.victoriametrics.com/VictoriaLogs/querying/).

See also:

- [Log collectors and data ingestion formats](#log-collectors-and-data-ingestion-formats).
- [Data ingestion troubleshooting](#troubleshooting).

## HTTP APIs
@ -21,9 +27,10 @@ VictoriaLogs accepts optional [HTTP parameters](#http-parameters) at data ingest
### Elasticsearch bulk API

VictoriaLogs accepts logs in [Elasticsearch bulk API](https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-bulk.html)
/ [OpenSearch Bulk API](http://opensearch.org/docs/1.2/opensearch/rest-api/document-apis/bulk/) format
at the `http://localhost:9428/insert/elasticsearch/_bulk` endpoint.

The following command pushes a single log line to VictoriaLogs:

```bash
echo '{"create":{}}
@ -31,7 +38,14 @@ echo '{"create":{}}
' | curl -X POST -H 'Content-Type: application/json' --data-binary @- http://localhost:9428/insert/elasticsearch/_bulk
```

It is possible to push thousands of log lines in a single request to this API.

See [these docs](https://docs.victoriametrics.com/VictoriaLogs/keyConcepts.html#data-model) for details on the fields that must be present in the ingested log messages.

The API accepts various HTTP parameters, which can change the data ingestion behavior - see [these docs](#http-parameters) for details.

The following command verifies that the data has been successfully ingested to VictoriaLogs by [querying](https://docs.victoriametrics.com/VictoriaLogs/querying/) it:

```bash
curl http://localhost:9428/select/logsql/query -d 'query=host.name:host123'
```
@ -43,10 +57,57 @@ The command should return the following response:
```bash
{"_msg":"cannot open file","_stream":"{}","_time":"2023-06-21T04:24:24Z","host.name":"host123"}
```

See also:

- [How to debug data ingestion](#troubleshooting).
- [HTTP parameters, which can be passed to the API](#http-parameters).
- [How to query VictoriaLogs](https://docs.victoriametrics.com/VictoriaLogs/querying.html).

### JSON stream API

VictoriaLogs accepts a JSON line stream aka [ndjson](http://ndjson.org/) at the `http://localhost:9428/insert/jsonline` endpoint.

The following command pushes multiple log lines to VictoriaLogs:

```bash
echo '{ "log": { "level": "info", "message": "hello world" }, "date": "2023-06-20T15:31:23Z", "stream": "stream1" }
{ "log": { "level": "error", "message": "oh no!" }, "date": "2023-06-20T15:32:10.567Z", "stream": "stream1" }
{ "log": { "level": "info", "message": "hello world" }, "date": "2023-06-20T15:35:11.567890+02:00", "stream": "stream2" }
' | curl -X POST -H 'Content-Type: application/stream+json' --data-binary @- \
  'http://localhost:9428/insert/jsonline?_stream_fields=stream&_time_field=date&_msg_field=log.message'
```

It is possible to push an unlimited number of log lines in a single request to this API.

The [timestamp field](https://docs.victoriametrics.com/VictoriaLogs/keyConcepts.html#time-field) must be
in the [ISO8601](https://en.wikipedia.org/wiki/ISO_8601) format. For example, `2023-06-20T15:32:10Z`.
An optional fractional part of seconds can be specified after the dot - `2023-06-20T15:32:10.123Z`.
A timezone can be specified instead of the `Z` suffix - `2023-06-20T15:32:10+02:00`.

See [these docs](https://docs.victoriametrics.com/VictoriaLogs/keyConcepts.html#data-model) for details on the fields that must be present in the ingested log messages.

The API accepts various HTTP parameters, which can change the data ingestion behavior - see [these docs](#http-parameters) for details.

The following command verifies that the data has been successfully ingested into VictoriaLogs by [querying](https://docs.victoriametrics.com/VictoriaLogs/querying/) it:

```bash
curl http://localhost:9428/select/logsql/query -d 'query=log.level:*'
```

The command should return the following response:

```bash
{"_msg":"hello world","_stream":"{stream=\"stream2\"}","_time":"2023-06-20T13:35:11.56789Z","log.level":"info"}
{"_msg":"hello world","_stream":"{stream=\"stream1\"}","_time":"2023-06-20T15:31:23Z","log.level":"info"}
{"_msg":"oh no!","_stream":"{stream=\"stream1\"}","_time":"2023-06-20T15:32:10.567Z","log.level":"error"}
```

See also:

- [How to debug data ingestion](#troubleshooting).
- [HTTP parameters, which can be passed to the API](#http-parameters).
- [How to query VictoriaLogs](https://docs.victoriametrics.com/VictoriaLogs/querying.html).

### HTTP parameters
@ -104,3 +165,14 @@ VictoriaLogs exposes various [metrics](https://docs.victoriametrics.com/Victoria
since the last VictoriaLogs restart. If this metric grows rapidly during extended periods of time, then this may lead
to [high cardinality issues](https://docs.victoriametrics.com/VictoriaLogs/keyConcepts.html#high-cardinality).
The newly created log streams can be inspected in logs by passing the `-logNewStreams` command-line flag to VictoriaLogs.
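
For example, a sketch of enabling this inspection at startup, using the flag referenced above:

```bash
# Log every newly registered log stream to stdout for cardinality debugging.
/path/to/victoria-logs -logNewStreams
```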
## Log collectors and data ingestion formats

Here is the list of log collectors and their ingestion formats supported by VictoriaLogs:

| How to set up the collector | Format: Elasticsearch | Format: JSON Stream |
|-----------------------------|-----------------------|---------------------|
| [Filebeat](https://docs.victoriametrics.com/VictoriaLogs/data-ingestion/Filebeat.html) | [Yes](https://www.elastic.co/guide/en/beats/filebeat/current/elasticsearch-output.html) | No |
| [Fluentbit](https://docs.victoriametrics.com/VictoriaLogs/data-ingestion/Fluentbit.html) | No | [Yes](https://docs.fluentbit.io/manual/pipeline/outputs/http) |
| [Logstash](https://docs.victoriametrics.com/VictoriaLogs/data-ingestion/Logstash.html) | [Yes](https://www.elastic.co/guide/en/logstash/current/plugins-outputs-elasticsearch.html) | No |
| [Vector](https://docs.victoriametrics.com/VictoriaLogs/data-ingestion/Vector.html) | [Yes](https://vector.dev/docs/reference/configuration/sinks/elasticsearch/) | No |
VictoriaLogs/data-ingestion/Vector.md (new file)
@ -0,0 +1,137 @@
# Vector setup

Specify the [Elasticsearch sink type](https://vector.dev/docs/reference/configuration/sinks/elasticsearch/) in the `vector.toml`
for sending the collected logs to [VictoriaLogs](https://docs.victoriametrics.com/VictoriaLogs/):

```toml
[sinks.vlogs]
  inputs = [ "your_input" ]
  type = "elasticsearch"
  endpoints = [ "http://localhost:9428/insert/elasticsearch/" ]
  mode = "bulk"
  api_version = "v8"
  healthcheck.enabled = false

  [sinks.vlogs.query]
    _msg_field = "message"
    _time_field = "timestamp"
    _stream_fields = "host,container_name"
```

Substitute the `localhost:9428` address inside the `endpoints` section with the real TCP address of VictoriaLogs.

Replace `your_input` with the name of the `inputs` section, which collects logs. See [these docs](https://vector.dev/docs/reference/configuration/sources/) for details.

See [these docs](https://docs.victoriametrics.com/VictoriaLogs/data-ingestion/#http-parameters) for details on the parameters specified
in the `[sinks.vlogs.query]` section.

It is recommended to verify whether the initial setup generates the needed [log fields](https://docs.victoriametrics.com/VictoriaLogs/keyConcepts.html#data-model)
and uses the correct [stream fields](https://docs.victoriametrics.com/VictoriaLogs/keyConcepts.html#stream-fields).
This can be done by specifying the `debug` [parameter](https://docs.victoriametrics.com/VictoriaLogs/data-ingestion/#http-parameters)
in the `[sinks.vlogs.query]` section and then inspecting VictoriaLogs logs:

```toml
[sinks.vlogs]
  inputs = [ "your_input" ]
  type = "elasticsearch"
  endpoints = [ "http://localhost:9428/insert/elasticsearch/" ]
  mode = "bulk"
  api_version = "v8"
  healthcheck.enabled = false

  [sinks.vlogs.query]
    _msg_field = "message"
    _time_field = "timestamp"
    _stream_fields = "host,container_name"
    debug = "1"
```

If some [log fields](https://docs.victoriametrics.com/VictoriaLogs/keyConcepts.html#data-model) must be skipped
during data ingestion, then they can be put into the `ignore_fields` [parameter](https://docs.victoriametrics.com/VictoriaLogs/data-ingestion/#http-parameters).
For example, the following config instructs VictoriaLogs to ignore `log.offset` and `event.original` fields in the ingested logs:

```toml
[sinks.vlogs]
  inputs = [ "your_input" ]
  type = "elasticsearch"
  endpoints = [ "http://localhost:9428/insert/elasticsearch/" ]
  mode = "bulk"
  api_version = "v8"
  healthcheck.enabled = false

  [sinks.vlogs.query]
    _msg_field = "message"
    _time_field = "timestamp"
    _stream_fields = "host,container_name"
    ignore_fields = "log.offset,event.original"
```

When Vector ingests logs into VictoriaLogs at a high rate, it may be needed to tune the `batch.max_events` option.
For example, the following config is optimized for a higher than usual ingestion rate:

```toml
[sinks.vlogs]
  inputs = [ "your_input" ]
  type = "elasticsearch"
  endpoints = [ "http://localhost:9428/insert/elasticsearch/" ]
  mode = "bulk"
  api_version = "v8"
  healthcheck.enabled = false

  [sinks.vlogs.query]
    _msg_field = "message"
    _time_field = "timestamp"
    _stream_fields = "host,container_name"

  [sinks.vlogs.batch]
    max_events = 1000
```

If Vector sends logs to VictoriaLogs in another datacenter, then it may be useful to enable data compression via the `compression = "gzip"` option.
This usually allows saving network bandwidth and costs by up to 5 times:

```toml
[sinks.vlogs]
  inputs = [ "your_input" ]
  type = "elasticsearch"
  endpoints = [ "http://localhost:9428/insert/elasticsearch/" ]
  mode = "bulk"
  api_version = "v8"
  healthcheck.enabled = false
  compression = "gzip"

  [sinks.vlogs.query]
    _msg_field = "message"
    _time_field = "timestamp"
    _stream_fields = "host,container_name"
```

By default, the ingested logs are stored in the `(AccountID=0, ProjectID=0)` [tenant](https://docs.victoriametrics.com/VictoriaLogs/keyConcepts.html#multitenancy).
If you need to store logs in another tenant, then specify the needed tenant via the `[sinks.vlogs.request.headers]` section.
For example, the following `vector.toml` config instructs Vector to store the data to the `(AccountID=12, ProjectID=34)` tenant:

```toml
[sinks.vlogs]
  inputs = [ "your_input" ]
  type = "elasticsearch"
  endpoints = [ "http://localhost:9428/insert/elasticsearch/" ]
  mode = "bulk"
  api_version = "v8"
  healthcheck.enabled = false

  [sinks.vlogs.query]
    _msg_field = "message"
    _time_field = "timestamp"
    _stream_fields = "host,container_name"

  [sinks.vlogs.request.headers]
    AccountID = "12"
    ProjectID = "34"
```

See also:

- [Data ingestion troubleshooting](https://docs.victoriametrics.com/VictoriaLogs/data-ingestion/#troubleshooting).
- [How to query VictoriaLogs](https://docs.victoriametrics.com/VictoriaLogs/querying/).
- [Elasticsearch output docs for Vector](https://vector.dev/docs/reference/configuration/sinks/elasticsearch/).
- [Docker-compose demo for Vector integration with VictoriaLogs](https://github.com/VictoriaMetrics/VictoriaMetrics/tree/master/deployment/docker/victorialogs/vector-docker).
@ -1,6 +1,15 @@
# Querying

[VictoriaLogs](https://docs.victoriametrics.com/VictoriaLogs/) can be queried with [LogsQL](https://docs.victoriametrics.com/VictoriaLogs/LogsQL.html)
in the following ways:

- [Web UI](#web-ui) - a web-based UI for querying logs
- [HTTP API](#http-api)
- [Command-line interface](#command-line)

## HTTP API

VictoriaLogs can be queried at the `/select/logsql/query` HTTP endpoint.
The [LogsQL](https://docs.victoriametrics.com/VictoriaLogs/LogsQL.html) query must be passed via the `query` argument.
For example, the following query returns all the log entries with the `error` word:
@ -48,6 +57,30 @@ curl http://localhost:9428/select/logsql/query -H 'AccountID: 12' -H 'ProjectID:
The number of requests to `/select/logsql/query` can be [monitored](https://docs.victoriametrics.com/VictoriaLogs/#monitoring)
with the `vl_http_requests_total{path="/select/logsql/query"}` metric.

## Web UI

VictoriaLogs provides a simple Web UI for logs [querying](https://docs.victoriametrics.com/VictoriaLogs/LogsQL.html) and exploration
at `http://localhost:9428/select/vmui`. The UI allows exploring query results:

<img src="vmui.png" width="800" />

There are three modes of displaying query results:

- `Group` - results are displayed as a table with rows grouped by stream and fields for filtering.
- `Table` - displays query results as a table.
- `JSON` - displays the raw JSON response from the [HTTP API](#http-api).

This is the first version with minimal functionality. It comes with the following limitations:

- The number of query results is always limited to 1000 lines. Iteratively add
  more specific [filters](https://docs.victoriametrics.com/VictoriaLogs/LogsQL.html#filters) to the query
  in order to get the full response with less than 1000 lines.
- Queries are always executed against [tenant](https://docs.victoriametrics.com/VictoriaLogs/#multitenancy) `0`.

These limitations will be removed in future versions.

To get around the current limitations, you can use an alternative - the [command-line interface](#command-line).

## Command-line

VictoriaLogs integrates well with `curl` and other command-line tools during querying because of the following features:
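
For example, a sketch of combining the query endpoint with the Unix tools mentioned in the key features (`jq`, `sort`, `less`); the field name follows the JSON stream examples above:

```bash
# Stream matching log entries, extract the message field and page through the sorted output.
curl -s http://localhost:9428/select/logsql/query -d 'query=error' | jq -r '._msg' | sort | less
```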
VictoriaLogs/querying/vmui.png (new binary file, 1.0 MiB; not shown)