VictoriaMetrics/docs/VictoriaLogs/data-ingestion/Vector.md

---
weight: 20
title: Vector setup
disableToc: true
menu:
  docs:
    parent: victorialogs-data-ingestion
    weight: 20
aliases:
  - /VictoriaLogs/data-ingestion/Vector.html
  - /victorialogs/data-ingestion/Vector.html
  - /victorialogs/data-ingestion/vector.html
---

VictoriaLogs supports the following Vector sinks:

## Elasticsearch

Specify the `elasticsearch` sink type in `vector.yaml` for sending the collected logs to VictoriaLogs:

```yaml
sinks:
  vlogs:
    inputs:
      - your_input
    type: elasticsearch
    endpoints:
      - http://localhost:9428/insert/elasticsearch/
    mode: bulk
    api_version: v8
    healthcheck:
      enabled: false
    query:
      _msg_field: message
      _time_field: timestamp
      _stream_fields: host,container_name
```

## Loki

Specify the `loki` sink type in `vector.yaml` for sending the collected logs to VictoriaLogs:

```yaml
sinks:
  vlogs:
    type: "loki"
    endpoint: "http://localhost:9428/insert/loki/"
    inputs:
      - your_input
    compression: gzip
    path: /api/v1/push?_msg_field=message.message&_time_field=timestamp&_stream_fields=source
    encoding:
      codec: json
    labels:
      source: vector
```
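The ingestion parameters in `path` above are ordinary URL query arguments. As an illustration, a small Python helper (hypothetical, not part of Vector or VictoriaLogs) can build such a path from the same field names used in the config:

```python
from urllib.parse import urlencode

def loki_insert_path(msg_field, time_field, stream_fields):
    """Build the Loki push path with VictoriaLogs ingestion parameters."""
    params = {
        "_msg_field": msg_field,                    # field holding the log message
        "_time_field": time_field,                  # field holding the log timestamp
        "_stream_fields": ",".join(stream_fields),  # fields identifying the log stream
    }
    return "/api/v1/push?" + urlencode(params)

print(loki_insert_path("message.message", "timestamp", ["source"]))
# → /api/v1/push?_msg_field=message.message&_time_field=timestamp&_stream_fields=source
```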

Substitute the `localhost:9428` address inside the `endpoints` section with the real TCP address of VictoriaLogs.

Replace `your_input` with the name of the `inputs` section that collects logs. See these docs for details.

See these docs for details on the parameters specified in the `sinks.vlogs.query` section.

It is recommended to verify whether the initial setup generates the needed log fields and uses the correct stream fields. This can be done by specifying the `debug` parameter in the `sinks.vlogs.query` section and then inspecting VictoriaLogs logs:

```yaml
sinks:
  vlogs:
    inputs:
      - your_input
    type: elasticsearch
    endpoints:
      - http://localhost:9428/insert/elasticsearch/
    mode: bulk
    api_version: v8
    healthcheck:
      enabled: false
    query:
      _msg_field: message
      _time_field: timestamp
      _stream_fields: host,container_name
      debug: "1"
```

If some log fields must be skipped during data ingestion, then they can be put into the `_ignore_fields` parameter. For example, the following config instructs VictoriaLogs to ignore the `log.offset` and `event.original` fields in the ingested logs:

```yaml
sinks:
  vlogs:
    inputs:
      - your_input
    type: elasticsearch
    endpoints:
      - http://localhost:9428/insert/elasticsearch/
    mode: bulk
    api_version: v8
    healthcheck:
      enabled: false
    query:
      _msg_field: message
      _time_field: timestamp
      _stream_fields: host,container_name
      _ignore_fields: log.offset,event.original
```
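Conceptually, `_ignore_fields` drops the named fields from each log entry before it is stored. A toy Python sketch of the effect (not VictoriaLogs' actual implementation):

```python
def drop_ignored_fields(entry, ignore_fields):
    """Return a copy of the log entry without the ignored fields."""
    return {k: v for k, v in entry.items() if k not in ignore_fields}

entry = {
    "message": "hello",
    "log.offset": 12345,
    "event.original": "raw line",
    "host": "web-1",
}
print(drop_ignored_fields(entry, {"log.offset", "event.original"}))
# → {'message': 'hello', 'host': 'web-1'}
```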

When Vector ingests logs into VictoriaLogs at a high rate, it may be needed to tune the `batch.max_events` option. For example, the following config is optimized for a higher than usual ingestion rate:

```yaml
sinks:
  vlogs:
    inputs:
      - your_input
    type: elasticsearch
    endpoints:
      - http://localhost:9428/insert/elasticsearch/
    mode: bulk
    api_version: v8
    healthcheck:
      enabled: false
    query:
      _msg_field: message
      _time_field: timestamp
      _stream_fields: host,container_name
    batch:
      max_events: 1000
```
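The trade-off behind `batch.max_events` is simple: at a fixed ingestion rate, larger batches mean fewer HTTP requests to VictoriaLogs. A back-of-envelope sketch with hypothetical rates:

```python
def requests_per_second(events_per_second, max_events):
    """Each request carries at most `max_events` log entries."""
    return events_per_second / max_events

# Hypothetical numbers: 100k events/s with batches of 1000 events
# yields ~100 requests/s instead of ~1000 with batches of 100.
print(requests_per_second(100_000, 1000))  # → 100.0
print(requests_per_second(100_000, 100))   # → 1000.0
```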

If Vector sends logs to VictoriaLogs in another datacenter, then it may be useful to enable data compression via the `compression: gzip` option. This usually reduces network bandwidth usage and costs by up to 5 times:

```yaml
sinks:
  vlogs:
    inputs:
      - your_input
    type: elasticsearch
    endpoints:
      - http://localhost:9428/insert/elasticsearch/
    mode: bulk
    api_version: v8
    healthcheck:
      enabled: false
    compression: gzip
    query:
      _msg_field: message
      _time_field: timestamp
      _stream_fields: host,container_name
```
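Log batches are highly repetitive, which is why gzip compresses them well. A quick self-contained illustration with made-up sample data:

```python
import gzip
import json

# Build a batch of 1000 similar JSON log lines (made-up sample data).
lines = "\n".join(
    json.dumps({"message": f"GET /api/v1/items/{i} 200", "host": "web-1"})
    for i in range(1000)
).encode()

compressed = gzip.compress(lines)
# Repetitive log data typically compresses far better than 3:1.
print(len(compressed) * 3 < len(lines))  # → True
```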

By default, the ingested logs are stored in the `(AccountID=0, ProjectID=0)` tenant. If you need to store logs in another tenant, then specify the needed tenant via the `sinks.vlogs.request.headers` section. For example, the following `vector.yaml` config instructs Vector to store the data in the `(AccountID=12, ProjectID=34)` tenant:

```yaml
sinks:
  vlogs:
    inputs:
      - your_input
    type: elasticsearch
    endpoints:
      - http://localhost:9428/insert/elasticsearch/
    mode: bulk
    api_version: v8
    healthcheck:
      enabled: false
    query:
      _msg_field: message
      _time_field: timestamp
      _stream_fields: host,container_name
    request:
      headers:
        AccountID: "12"
        ProjectID: "34"
```

## HTTP

Vector can be configured with the `http` sink type for sending data to the JSON stream API:

```yaml
sinks:
  vlogs:
    inputs:
      - your_input
    type: http
    uri: http://localhost:9428/insert/jsonline?_stream_fields=host,container_name&_msg_field=message&_time_field=timestamp
    encoding:
      codec: json
    framing:
      method: newline_delimited
    healthcheck:
      enabled: false
    request:
      headers:
        AccountID: "12"
        ProjectID: "34"
```
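With `codec: json` and `method: newline_delimited`, the request body is one JSON object per line. A Python sketch of such a body (the sample fields mirror the config above; this is illustrative, not Vector's own code):

```python
import json

def jsonline_body(entries):
    """Serialize log entries as newline-delimited JSON, one object per line."""
    return "\n".join(json.dumps(e) for e in entries) + "\n"

body = jsonline_body([
    {"message": "starting up", "timestamp": "2024-01-01T00:00:00Z",
     "host": "web-1", "container_name": "nginx"},
    {"message": "ready", "timestamp": "2024-01-01T00:00:01Z",
     "host": "web-1", "container_name": "nginx"},
])

# Each non-empty line is an independent JSON document:
parsed = [json.loads(line) for line in body.splitlines() if line]
print(len(parsed))  # → 2
```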

See also: