If you hit an issue or have a question about VictoriaMetrics components,
then please follow these steps in order to quickly find the solution:
1. Check the version of the VictoriaMetrics component that needs troubleshooting and compare
it to [the latest available version](https://docs.victoriametrics.com/CHANGELOG.html).
If the used version is older than the latest available version, then there is a high chance
that the issue is already resolved in newer releases. Carefully read [the changelog](https://docs.victoriametrics.com/CHANGELOG.html)
between your version and the latest version and check whether the issue is already fixed there.
If the issue is already fixed in newer versions, then upgrade to the newer version and verify whether the issue is gone:
- [How to upgrade single-node VictoriaMetrics](https://docs.victoriametrics.com/#how-to-upgrade-victoriametrics)
- [How to upgrade VictoriaMetrics cluster](https://docs.victoriametrics.com/Cluster-VictoriaMetrics.html#updating--reconfiguring-cluster-nodes)
The upgrade procedure for other VictoriaMetrics components is as simple as gracefully stopping the component
by sending it a `SIGINT` signal and then starting the new version of the component (see the sketch after this list).
In rare cases there may be breaking changes between different versions of VictoriaMetrics components.
These cases are documented in [the changelog](https://docs.victoriametrics.com/CHANGELOG.html),
so please read the changelog before the upgrade.
1. Check the logs of VictoriaMetrics components. They may contain useful information about the issue.
If the log message doesn't contain enough useful information for troubleshooting the issue,
then search for the log message in Google. There is a high chance that the issue is already reported
somewhere (docs, StackOverflow, GitHub issues, etc.) indexed by Google and the solution is already documented there.
1. If VictoriaMetrics logs have no relevant information, then try searching for the issue in Google
via multiple keywords and phrases specific to the issue. There is a high chance that the issue
and its solution are already documented somewhere.
1. Try searching for the issue at [VictoriaMetrics GitHub](https://github.com/VictoriaMetrics/VictoriaMetrics/issues).
The signal/noise ratio of search results there is much lower than in Google, but sometimes it may help
find relevant information about the issue when Google fails to find it.
If you locate the relevant GitHub issue, but it is missing some information on how to diagnose or troubleshoot
the issue, then please provide this information in the comments to the issue. This increases the chances
that the issue will be resolved soon.
1. Try searching for information about the issue in the [VictoriaMetrics source code](https://github.com/search?q=repo%3AVictoriaMetrics%2FVictoriaMetrics&type=code).
GitHub code search may not work well in some cases, so it is recommended to [check out the VictoriaMetrics source code](https://github.com/VictoriaMetrics/VictoriaMetrics/)
and perform a local search in the checked-out code.
Note that the source code for the VictoriaMetrics cluster is located in the [cluster](https://github.com/VictoriaMetrics/VictoriaMetrics/tree/cluster) branch.
1. Try searching for information about the issue in the history of the [VictoriaMetrics Slack chat](https://victoriametrics.slack.com).
There is a non-zero chance that somebody has already hit the same issue and documented the solution in Slack.
1. If the steps above didn't help find the solution to the issue, then please [file a new issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/new/choose)
providing the maximum details on how to reproduce the issue.
After that you can post the link to the issue in the [VictoriaMetrics Slack chat](https://victoriametrics.slack.com),
so the VictoriaMetrics community can help find the solution. It is better to file the issue at VictoriaMetrics GitHub
before posting your question to the VictoriaMetrics Slack chat, since GitHub issues are indexed by Google,
while Slack messages aren't. This simplifies searching for the solution for future VictoriaMetrics users.
1. Pro tip 1: if you see that [VictoriaMetrics docs](https://docs.victoriametrics.com/) contain incomplete or incorrect information,
then please create a pull request with the relevant changes. This will help the VictoriaMetrics community.
All the docs published at `https://docs.victoriametrics.com` are located in the [docs](https://github.com/VictoriaMetrics/VictoriaMetrics/tree/master/docs)
folder inside the VictoriaMetrics repository.
1. Pro tip 2: please provide links to existing docs / GitHub issues / StackOverflow questions
instead of copy-n-pasting the information from these sources when asking or answering questions
from the VictoriaMetrics community. If the linked resources don't have enough information,
then it is better to add the missing information to the web resource before providing links
to this information in the Slack chat. This will simplify searching for this information in the future
for VictoriaMetrics users via Google and ChatGPT :)
1. Pro tip 3: if you are answering somebody's question about VictoriaMetrics components
at GitHub issues / Slack chat / StackOverflow, then the best answer is a direct link to the information
regarding the question.
The next best answer is a concise message with multiple links to the relevant information.
The worst answer is a message with misleading or completely wrong information.
1. Pro tip 4: if you can fix the issue yourself, then please do it and provide the corresponding pull request!
We are glad to receive pull requests from the VictoriaMetrics community.
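For reference, a minimal upgrade sketch for a standalone component such as `vmagent` may look like the following; the process name, binary paths and flags here are assumptions, adjust them to your setup:

```sh
# Gracefully stop the running component by sending SIGINT (assumed process name).
kill -INT "$(pidof vmagent-prod)"

# After the old process exits, start the new version with the same command-line flags.
/path/to/new/vmagent-prod -remoteWrite.url=... &
```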
## Unexpected query results
If you migrate from InfluxDB, then pass the `-search.setLookbackToStep` command-line flag to single-node VictoriaMetrics
or to `vmselect` in the VictoriaMetrics cluster. See also [how to migrate from InfluxDB to VictoriaMetrics](https://docs.victoriametrics.com/guides/migrate-from-influx.html).
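As a quick sketch, the flag is passed at startup; the binary paths below are assumptions:

```sh
# Single-node VictoriaMetrics with InfluxDB-like lookback behavior,
# where the lookback window equals the `step` query arg.
/path/to/victoria-metrics-prod -search.setLookbackToStep

# Cluster version: pass the same flag to every vmselect node.
/path/to/vmselect-prod -search.setLookbackToStep -storageNode=...
```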
3. Sometimes response caching may lead to unexpected results when samples with older timestamps
are ingested into VictoriaMetrics (aka [backfilling](https://docs.victoriametrics.com/#backfilling)).
Try disabling the response cache and see whether this helps (see the example after this list). This can be done in the following ways:
- By passing the `-search.disableCache` command-line flag to single-node VictoriaMetrics
or to all the `vmselect` components if the cluster version of VictoriaMetrics is used.
- By passing the `nocache=1` query arg to every request to `/api/v1/query` and `/api/v1/query_range`.
If you use Grafana, then this query arg can be specified in the `Custom Query Parameters` field
in the Prometheus datasource settings - see [these docs](https://grafana.com/docs/grafana/latest/datasources/prometheus/) for details.
4. If you use the cluster version of VictoriaMetrics, then it may return partial responses by default
when some of the `vmstorage` nodes are temporarily unavailable - see [cluster availability docs](https://docs.victoriametrics.com/Cluster-VictoriaMetrics.html#cluster-availability)
for details. If you want to prioritize query consistency over cluster availability,
then you can pass the `-search.denyPartialResponse` command-line flag to all the `vmselect` nodes.
In this case VictoriaMetrics returns an error during querying if at least a single `vmstorage` node is unavailable.
Another option is to pass the `deny_partial_response=1` query arg to `/api/v1/query` and `/api/v1/query_range`.
If you use Grafana, then this query arg can be specified in the `Custom Query Parameters` field
in the Prometheus datasource settings - see [these docs](https://grafana.com/docs/grafana/latest/datasources/prometheus/) for details.
5. If you pass the `-replicationFactor` command-line flag to `vmselect`, then it is recommended to remove this flag,
since it may lead to incomplete responses when `vmstorage` nodes contain less than `-replicationFactor`
copies of the requested data.
6. Try upgrading to the [latest available version of VictoriaMetrics](https://github.com/VictoriaMetrics/VictoriaMetrics/releases)
and verifying whether the issue is fixed there.
7. Try executing the query with the `trace=1` query arg. This enables query tracing, which may contain
useful information on why the query returns unexpected data. See the [query tracing docs](https://docs.victoriametrics.com/#query-tracing) for details.
8. Inspect the command-line flags passed to VictoriaMetrics components. If you don't clearly understand the purpose
or the effect of some flag, then remove it from the list of flags passed to VictoriaMetrics components,
because some command-line flags may change query results in unexpected ways when set to improper values.
VictoriaMetrics is optimized for running with default flag values (i.e. when they aren't set explicitly).
9. If the steps above didn't help identify the root cause of unexpected query results,
then [file a bugreport](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/new) with details on how to reproduce the issue.
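The query args mentioned in the items above can be combined in a single request. Here is a hedged example against single-node VictoriaMetrics; the `localhost:8428` address and the `up` query are assumptions:

```sh
# Instant query with the response cache disabled, partial responses denied
# (the latter is only meaningful for the cluster version) and query tracing enabled.
curl 'http://localhost:8428/api/v1/query' \
  --data-urlencode 'query=up' \
  --data-urlencode 'nocache=1' \
  --data-urlencode 'deny_partial_response=1' \
  --data-urlencode 'trace=1'
```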
## Slow data ingestion
The most common reasons for slow data ingestion in VictoriaMetrics are the following:
1. Memory shortage for the given number of [active time series](https://docs.victoriametrics.com/FAQ.html#what-is-an-active-time-series).
Every new time series must be registered in the internal index (aka `indexdb`), so it can be quickly located on subsequent select queries.
The process of registering a new time series in the internal index is an order of magnitude slower
than the process of adding a new sample to an already registered time series.
So VictoriaMetrics may work slower than expected under [high churn rate](https://docs.victoriametrics.com/FAQ.html#what-is-high-churn-rate).
The [official Grafana dashboards for VictoriaMetrics](https://docs.victoriametrics.com/#monitoring)
provide a `Churn rate` graph, which shows the average number of new time series registered
during the last 24 hours. If this number exceeds the number of [active time series](https://docs.victoriametrics.com/FAQ.html#what-is-an-active-time-series),
then you need to identify and fix the source of [high churn rate](https://docs.victoriametrics.com/FAQ.html#what-is-high-churn-rate).
Issues like this are very hard to catch via the [official Grafana dashboard for the cluster version of VictoriaMetrics](https://docs.victoriametrics.com/Cluster-VictoriaMetrics.html#monitoring),
so proper diagnosis requires checking resource usage on the instances where VictoriaMetrics runs.
6. If you see the `TooHighSlowInsertsRate` [alert](https://docs.victoriametrics.com/#monitoring) while single-node VictoriaMetrics or `vmstorage` has enough
free CPU and RAM, then increase the `-cacheExpireDuration` command-line flag at single-node VictoriaMetrics or at `vmstorage` to a value
that exceeds the interval between ingested samples for the same time series (aka `scrape_interval`), as shown in the sketch after this list.
See [this comment](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/3976#issuecomment-1476883183) for more details.
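A hedged sketch for the `-cacheExpireDuration` tuning mentioned above; the binary path and the `2h` value are assumptions, pick a value bigger than the interval between samples for the same series:

```sh
# Keep in-memory cache entries alive for 2 hours, which must exceed the interval
# between ingested samples for the same time series (assumed value).
/path/to/vmstorage-prod -cacheExpireDuration=2h

# For single-node VictoriaMetrics, pass the same flag to the victoria-metrics binary.
```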
## Slow queries
Migrating `vmselect` to a machine with more CPU and RAM should help improve the speed of slow queries. Query performance
is always limited by the resources of the single `vmselect` which processes the query. For example, if 2 vCPU cores on `vmselect`
aren't enough to process a query fast enough, then migrating `vmselect` to a machine with 4 vCPU cores should increase heavy query performance by up to 2x.
If the line on the `Concurrent select` graph from the [official Grafana dashboard for the cluster version of VictoriaMetrics](https://docs.victoriametrics.com/Cluster-VictoriaMetrics.html#monitoring)
is close to the limit, then prefer adding more `vmselect` nodes to the cluster.
For more context, see [How to optimize PromQL and MetricsQL queries](https://valyala.medium.com/how-to-optimize-promql-and-metricsql-queries-85a1b75bf986).
VictoriaMetrics also provides a [query tracer](https://docs.victoriametrics.com/#query-tracing), which may help identify the source of slow query processing.
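For the cluster version, query tracing goes through `vmselect`; a hedged example, where the `vmselect` address, the tenant `0` and the query are assumptions:

```sh
# Instant query for tenant 0 via vmselect with query tracing enabled;
# the `trace` field in the JSON response contains the timing breakdown.
curl 'http://vmselect:8481/select/0/prometheus/api/v1/query' \
  --data-urlencode 'query=sum(rate(vm_http_requests_total[5m]))' \
  --data-urlencode 'trace=1'
```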