: "QUERY"`. QUERY_ALIAS will be used as a `for` label in generated metrics and anomaly scores.
* `writer`
    * `datasource_url` - Output destination. An HTTP endpoint that serves `/api/v1/import`.
Here is an example of the config file `vmanomaly_config.yml`.
``` yaml
scheduler:
  infer_every: "1m"
  fit_every: "2m"
  fit_window: "14d"
model:
  class: "model.prophet.ProphetModel"
  args:
    interval_width: 0.98
reader:
  datasource_url: "http://victoriametrics:8428/"
  queries:
    node_cpu_rate: "quantile by (mode) (0.5, rate(node_cpu_seconds_total[5m]))"
writer:
  datasource_url: "http://victoriametrics:8428/"
monitoring:
  pull: # Enable /metrics endpoint.
    addr: "0.0.0.0"
    port: 8490
```
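Once the full stack from section 8 is running, it can be useful to confirm that the reader query actually returns data before judging the model output. A minimal check, assuming VictoriaMetrics is reachable on `localhost:8428` from the host, is to run the same query through the Prometheus-compatible query API:
```
curl -s 'http://localhost:8428/api/v1/query' \
  --data-urlencode 'query=quantile by (mode) (0.5, rate(node_cpu_seconds_total[5m]))'
```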
## 6. vmanomaly output
As a result of its run, vmanomaly produces the following metrics:
- `anomaly_score` - the main metric. Values between 0.0 and 1.0 are considered non-anomalous, while values greater than 1.0 are considered anomalies (you can, of course, reconfigure that threshold in the alerting config),
- `yhat` - predicted expected value,
- `yhat_lower` - predicted lower boundary,
- `yhat_upper` - predicted upper boundary,
- `y` - initial query result value.
Here is an example of how an output metric is written into VictoriaMetrics:
`anomaly_score{for="node_cpu_rate", cpu="0", instance="node-exporter:9100", job="node-exporter", mode="idle"} 0.85`
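vmanomaly writes the other metrics for the same series as well. A hypothetical snapshot for a single CPU mode could look like this (labels abbreviated, values purely illustrative):
```
anomaly_score{for="node_cpu_rate", mode="idle"} 0.85
y{for="node_cpu_rate", mode="idle"}             0.93
yhat{for="node_cpu_rate", mode="idle"}          0.91
yhat_lower{for="node_cpu_rate", mode="idle"}    0.66
yhat_upper{for="node_cpu_rate", mode="idle"}    1.07
```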
## 7. vmalert configuration
Here we provide an example of the config for vmalert `vmalert_config.yml`.
``` yaml
groups:
  - name: AnomalyExample
    rules:
      - alert: HighAnomalyScore
        expr: 'anomaly_score > 1.0'
        labels:
          severity: warning
        annotations:
          summary: Anomaly Score exceeded 1.0. `rate(node_cpu_seconds_total)` is showing abnormal behavior.
```
In the query expression we need to put a condition on the generated anomaly scores. Usually, an anomaly score between 0.0 and 1.0 means the analyzed value is not abnormal. The further the anomaly score exceeds 1.0, the more confident the model is that the value is an anomaly.
You can choose whatever threshold you consider reasonable based on the anomaly score metric generated by vmanomaly. One of the best ways is to estimate it visually, by plotting the `anomaly_score` metric along with the predicted "expected" range bounded by `yhat_lower` and `yhat_upper`. Later in this tutorial we will show an example.
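For instance, if the default rule fires too often while the model is still warming up, a hedged variation is to raise the threshold and require the score to stay elevated for several minutes before the alert fires (the rule name and values below are illustrative, not part of this guide's config):
``` yaml
- alert: HighAnomalyScoreSustained
  expr: 'anomaly_score > 1.5'
  for: 5m
  labels:
    severity: critical
  annotations:
    summary: Anomaly score stayed above 1.5 for 5 minutes.
```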
## 8. Docker Compose configuration
You can find the `docker-compose.yml` and all configs in this [folder](https://github.com/VictoriaMetrics/VictoriaMetrics/tree/master/deployment/docker/vmanomaly/vmanomaly-vmalert-guide/).
Now we are going to configure the `docker-compose.yml` file to run all needed services.
Here are all services we are going to run:
* vmanomaly - VictoriaMetrics Anomaly Detection service.
* victoriametrics - VictoriaMetrics Time Series Database
* vmagent - an agent that collects metrics from various sources, relabels and filters them, and stores them in VictoriaMetrics or any other storage system via the Prometheus remote_write protocol.
* [grafana](https://grafana.com/) - visualization tool.
* node-exporter - Prometheus [Node Exporter](https://prometheus.io/docs/guides/node-exporter/) exposes a wide variety of hardware- and kernel-related metrics.
* vmalert - VictoriaMetrics Alerting service.
* alertmanager - Notification service that handles alerts from vmalert.
### Grafana setup
#### Create a data source manifest
In the `provisioning/datasources/` directory, create a file called `datasource.yml` with the following content:
> The default username/password pair is `admin:admin`
``` yaml
apiVersion: 1
datasources:
  - name: VictoriaMetrics
    type: prometheus
    access: proxy
    url: http://victoriametrics:8428
    isDefault: true
    jsonData:
      prometheusType: Prometheus
      prometheusVersion: 2.24.0
```
#### Define a dashboard provider
In the `provisioning/dashboards/` directory, create a file called `dashboard.yml` with the following content:
``` yaml
apiVersion: 1
providers:
  - name: Prometheus
    orgId: 1
    folder: ''
    type: file
    options:
      path: /var/lib/grafana/dashboards
```
### Scrape config
Let's create `prometheus.yml` file for `vmagent` configuration.
``` yaml
global:
  scrape_interval: 10s
scrape_configs:
  - job_name: 'vmagent'
    static_configs:
      - targets: ['vmagent:8429']
  - job_name: 'vmalert'
    static_configs:
      - targets: ['vmalert:8880']
  - job_name: 'victoriametrics'
    static_configs:
      - targets: ['victoriametrics:8428']
  - job_name: 'node-exporter'
    static_configs:
      - targets: ['node-exporter:9100']
  - job_name: 'vmanomaly'
    static_configs:
      - targets: ['vmanomaly:8490']
```
### vmanomaly licensing
We will utilize the license key stored locally in the file `vmanomaly_license`.
For additional licensing options, please refer to the [VictoriaMetrics Anomaly Detection documentation on licensing](https://docs.victoriametrics.com/anomaly-detection/Overview#licensing).
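Assuming you have obtained a trial or paid key, one simple way to create the file expected by the compose setup below is (replace the placeholder with your actual key):
```
echo "YOUR_LICENSE_KEY" > vmanomaly_license
```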
### Alertmanager setup
Let's create `alertmanager.yml` file for `alertmanager` configuration.
```yml
route:
  receiver: blackhole
receivers:
  - name: blackhole
```
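The `blackhole` receiver simply discards all notifications, which is enough for this guide. If you want alerts to actually reach you, a minimal sketch, assuming a hypothetical webhook endpoint, could look like this:
``` yaml
route:
  receiver: webhook
receivers:
  - name: webhook
    webhook_configs:
      # hypothetical endpoint, replace with your own alert handler
      - url: "http://my-alert-handler:8080/hook"
```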
### Docker-compose
Let's wrap it all up together into the `docker-compose.yml` file.
``` yaml
services:
  vmagent:
    container_name: vmagent
    image: victoriametrics/vmagent:v1.96.0
    depends_on:
      - "victoriametrics"
    ports:
      - 8429:8429
    volumes:
      - vmagentdata-guide-vmanomaly-vmalert:/vmagentdata
      - ./prometheus.yml:/etc/prometheus/prometheus.yml
    command:
      - "--promscrape.config=/etc/prometheus/prometheus.yml"
      - "--remoteWrite.url=http://victoriametrics:8428/api/v1/write"
    networks:
      - vm_net
    restart: always
  victoriametrics:
    container_name: victoriametrics
    image: victoriametrics/victoria-metrics:v1.96.0
    ports:
      - 8428:8428
    volumes:
      - vmdata-guide-vmanomaly-vmalert:/storage
    command:
      - "--storageDataPath=/storage"
      - "--httpListenAddr=:8428"
      - "--vmalert.proxyURL=http://vmalert:8880"
      - "-search.disableCache=1" # for guide only, do not use in production
    networks:
      - vm_net
    restart: always
  grafana:
    container_name: grafana
    image: grafana/grafana-oss:10.2.1
    depends_on:
      - "victoriametrics"
    ports:
      - 3000:3000
    volumes:
      - grafanadata-guide-vmanomaly-vmalert:/var/lib/grafana
      - ./provisioning/datasources:/etc/grafana/provisioning/datasources
      - ./provisioning/dashboards:/etc/grafana/provisioning/dashboards
      - ./vmanomaly_guide_dashboard.json:/var/lib/grafana/dashboards/vmanomaly_guide_dashboard.json
    networks:
      - vm_net
    restart: always
  vmalert:
    container_name: vmalert
    image: victoriametrics/vmalert:v1.96.0
    depends_on:
      - "victoriametrics"
    ports:
      - 8880:8880
    volumes:
      - ./vmalert_config.yml:/etc/alerts/alerts.yml
    command:
      - "--datasource.url=http://victoriametrics:8428/"
      - "--remoteRead.url=http://victoriametrics:8428/"
      - "--remoteWrite.url=http://victoriametrics:8428/"
      - "--notifier.url=http://alertmanager:9093/"
      - "--rule=/etc/alerts/*.yml"
      # display source of alerts in grafana
      - "--external.url=http://127.0.0.1:3000" #grafana outside container
      # when copypaste the line be aware of '$$' for escaping in '$expr'
      - '--external.alert.source=explore?orgId=1&left=["now-1h","now","VictoriaMetrics",{"expr": },{"mode":"Metrics"},{"ui":[true,true,true,"none"]}]'
    networks:
      - vm_net
    restart: always
  vmanomaly:
    container_name: vmanomaly
    image: victoriametrics/vmanomaly:v1.8.0
    depends_on:
      - "victoriametrics"
    ports:
      - "8490:8490"
    networks:
      - vm_net
    restart: always
    volumes:
      - ./vmanomaly_config.yml:/config.yaml
      - ./vmanomaly_license:/license
    platform: "linux/amd64"
    command:
      - "/config.yaml"
      - "--license-file=/license"
  alertmanager:
    container_name: alertmanager
    image: prom/alertmanager:v0.25.0
    volumes:
      - ./alertmanager.yml:/config/alertmanager.yml
    command:
      - "--config.file=/config/alertmanager.yml"
    ports:
      - 9093:9093
    networks:
      - vm_net
    restart: always
  node-exporter:
    image: quay.io/prometheus/node-exporter:v1.7.0
    container_name: node-exporter
    ports:
      - 9100:9100
    pid: host
    restart: unless-stopped
    networks:
      - vm_net
volumes:
  vmagentdata-guide-vmanomaly-vmalert: {}
  vmdata-guide-vmanomaly-vmalert: {}
  grafanadata-guide-vmanomaly-vmalert: {}
networks:
  vm_net:
```
Before running docker-compose, make sure that your directory contains all the config files created in the previous steps.
This docker-compose file will pull the Docker images, set up each service, and run them all together with the command:
```
docker-compose up -d
```
To check if vmanomaly is up and running, you can inspect the Docker logs:
```
docker logs vmanomaly
```
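If the container started successfully, the self-monitoring endpoint configured in the `monitoring` section of `vmanomaly_config.yml` should also respond (port 8490 is published by the compose file above):
```
curl -s 'http://localhost:8490/metrics' | head
```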
## 9. Model results
To look at the model results, open Grafana at `localhost:3000`. Note that vmanomaly needs some time to produce enough data to visualize.
Let's investigate model output visualization in Grafana.
On the Grafana dashboard `Vmanomaly Guide`, for each CPU mode you can investigate:
* initial query result - `quantile by (mode) (0.5, rate(node_cpu_seconds_total[5m]))`
* `anomaly_score`
* `yhat` - Predicted value
* `yhat_lower` - Predicted lower boundary
* `yhat_upper` - Predicted upper boundary
Each of these metrics will contain the same labels that our query `quantile by (mode) (0.5, rate(node_cpu_seconds_total[5m]))` returns.
### Anomaly scores for each metric with its corresponding labels.
Query: `anomaly_score`
Check whether the anomaly score is high for the data points you consider anomalies. If it is not, you can try different parameters in the config file or another model type.
As you may notice, many data points show an anomaly score greater than 1. This is expected: we have just started scraping and storing data, so there are not enough data points to train on yet. Wait a while for more data to accumulate and see how well this particular model finds anomalies. In our config we specified a 14-day `fit_window`, i.e. two weeks of data needed to fit the model properly.
### Lower and upper boundaries and predicted values.
Queries: `yhat_lower`, `yhat_upper` and `yhat`
Boundaries of 'normal' metric values according to model inference.
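For example, to compare the raw values with the predicted band for a single CPU mode, you can plot queries like the following in Grafana (the `mode="idle"` filter is just an illustration; any mode returned by the query works):
```
y{for="node_cpu_rate", mode="idle"}
yhat{for="node_cpu_rate", mode="idle"}
yhat_lower{for="node_cpu_rate", mode="idle"}
yhat_upper{for="node_cpu_rate", mode="idle"}
```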
### Alerting
On the page `http://localhost:8880/vmalert/groups` you can find our configured Alerting rule:
According to the rule configured for vmalert, an alert fires when the anomaly score exceeds 1. You will see it on the Alerts tab at `http://localhost:8880/vmalert/alerts`:
## 10. Conclusion
We've explored the integration and practical application of *VictoriaMetrics Anomaly Detection* (`vmanomaly`) in conjunction with `vmalert`. This tutorial has taken you through the necessary prerequisites, setup, and configurations required for anomaly detection in time series data.
Key takeaways include:
1. **Understanding vmanomaly and vmalert**: We've discussed the functionalities of `vmanomaly` and `vmalert`, highlighting how they work individually and in tandem to detect anomalies in time series data.
2. **Practical Configuration and Setup**: By walking through the setup of a docker-compose environment, we've demonstrated how to configure and run VictoriaMetrics along with its associated services, including `vmanomaly` and `vmalert`.
3. **Data Analysis and Monitoring**: The guide provided insights on how to collect, analyze, and visualize data using Grafana, interpreting the anomaly scores and other metrics generated by `vmanomaly`.
4. **Alert Configuration**: We've shown how to set up and customize alerting rules in `vmalert` based on produced anomaly scores, enabling proactive monitoring and timely response to potential issues.
As you continue to use VictoriaMetrics Anomaly Detection and `vmalert`, remember that the effectiveness of anomaly detection largely depends on the appropriateness of the model chosen, the accuracy of configurations and the data patterns observed. This guide serves as a starting point, and we encourage you to experiment with different configurations and models to best suit your specific data needs and use cases. In case you need a helping hand - [contact us](https://victoriametrics.com/contact-us/).