* Updated docker-compose.yml
Added one more vmstorage node to the compose file and pointed vmselect and vminsert to it.
* Update prometheus.yml
Updated according to the requested changes for the new vmstorage node
* Update docker-compose
Added port mapping
* Updated docker-compose
Set same ports for both storages
* Update prometheus.yml
Changed target for vmstorage1
* Updated docker compose
Changed the flags for vmstorage1
* Update docker compose
Changed flags for vminsert and vmselect to point to vmstorage1
* Update docker compose
Made the vmstorage names uniform
* Update prometheus.yml
* Update docker-compose.yml
Removed additional flags as they are using default values.
* Update prometheus.yml
Put vmstorage targets under the same job
* Update prometheus.yml
Changed targets format
* Update docker-compose.yml
Set a different volume for each vmstorage node
* Update deployment/docker/prometheus.yml
* Apply suggestions from code review
Co-authored-by: Aliaksandr Valialkin <valyala@victoriametrics.com>
Do not store in memory the response from the last scrape for each target if the -promscrape.noStaleMarkers option is enabled.
This should reduce memory usage when the scraped targets return large responses.
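A minimal sketch of the idea with made-up names (not the actual lib/promscrape code): the previous scrape response is kept in memory only when stale markers must be generated from it on the next scrape.

```go
package promscrape

type scrapeWork struct {
	noStaleMarkers bool
	lastScrape     []byte // previous response body; needed only for generating stale markers
}

func (sw *scrapeWork) storeLastScrape(body []byte) {
	if sw.noStaleMarkers {
		// Stale markers are disabled, so the previous response is not needed.
		// Dropping it saves memory for targets returning large responses.
		sw.lastScrape = nil
		return
	}
	sw.lastScrape = append(sw.lastScrape[:0], body...)
}
```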
Previously, the ID for an alert entity was generated without the alertname or groupname.
This led to collisions when multiple alerting rules within the same group
produced the same labelsets. E.g. expr: `sum(metric1) by (job) > 0` and
expr: `sum(metric2) by (job) > 0` could result in the same labelset `job: "job"`.
The issue affects only the UI and Web API parts of vmalert, because the alert ID is used
only for displaying and finding active alerts. It does not affect the state restore
procedure, since this label was added right before pushing to remote storage.
The change now adds all extra labels right after receiving the response from the datasource,
and no longer adds extra labels right before pushing to remote storage.
Additionally, the change introduces a new flag `Restored`, which is displayed in the UI
for alerts that have been restored from remote storage on restart.
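A hedged sketch of why this removes the collision, not vmalert's actual code: the alert ID is a hash over the sorted labelset, so once the rule name and group are part of that set, two rules that return identical datasource labels still hash to different IDs.

```go
package main

import (
	"fmt"
	"hash/fnv"
	"sort"
)

// alertID hashes a labelset in sorted key order.
func alertID(labels map[string]string) uint64 {
	keys := make([]string, 0, len(labels))
	for k := range labels {
		keys = append(keys, k)
	}
	sort.Strings(keys)
	h := fnv.New64a()
	for _, k := range keys {
		h.Write([]byte(k))
		h.Write([]byte(labels[k]))
	}
	return h.Sum64()
}

func main() {
	// Both rules return only `job="job"` from the datasource, but the extra
	// labels (names here are illustrative) are now added before ID generation.
	a := map[string]string{"job": "job", "alertname": "rule1", "alertgroup": "group"}
	b := map[string]string{"job": "job", "alertname": "rule2", "alertgroup": "group"}
	fmt.Println(alertID(a) != alertID(b)) // true: no collision
}
```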
* adds tab as a second separator for the Graphite text protocol
* replaces indexFunc with indexAny
* Update lib/protoparser/graphite/parser_test.go
Co-authored-by: Aliaksandr Valialkin <valyala@victoriametrics.com>
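A minimal sketch of the parsing change (splitMetric and the sample lines are illustrative, not the actual lib/protoparser/graphite code): strings.IndexAny accepts both space and tab as the separator between the metric name and the rest of the line.

```go
package main

import (
	"fmt"
	"strings"
)

// splitMetric splits a Graphite plaintext line into the metric name and the
// remainder, treating both space and tab as separators.
func splitMetric(line string) (name, rest string) {
	n := strings.IndexAny(line, " \t")
	if n < 0 {
		return line, ""
	}
	return line[:n], strings.TrimLeft(line[n:], " \t")
}

func main() {
	fmt.Println(splitMetric("foo.bar 123 1640000000"))
	fmt.Println(splitMetric("foo.bar\t123\t1640000000"))
}
```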
* feat: add query history
* fix: change keyUp detection for query history navigation
* feat: set default query history
* app/vmselect/vmui: `make vmui-update`
Co-authored-by: Aliaksandr Valialkin <valyala@victoriametrics.com>
This should make the set flags visible to flag.Visit(), which is used later for logging
and for exporting the `is_set` label for these flags on the /metrics page.
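For context, a minimal sketch of the pattern this relies on: flag.Visit walks only the flags explicitly set on the command line, while flag.VisitAll walks every registered flag. The example flag and the output format below are illustrative assumptions, not the real /metrics exposition.

```go
package main

import (
	"flag"
	"fmt"
)

func main() {
	flag.String("httpListenAddr", ":8428", "TCP address to listen to") // example flag
	flag.Parse()

	isSet := make(map[string]bool)
	flag.Visit(func(f *flag.Flag) { // visits only flags that were set
		isSet[f.Name] = true
	})

	flag.VisitAll(func(f *flag.Flag) { // visits every registered flag
		fmt.Printf("flag %q is_set=%v\n", f.Name, isSet[f.Name])
	})
}
```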
This commit fixes a potential race condition when a group update
and ID() generation happen simultaneously.
Signed-off-by: hagen1778 <roman@victoriametrics.com>
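A sketch of the general shape of such a fix, assuming a simplified Group type (not vmalert's actual code): the fields read by ID() are guarded by the same lock taken by group updates, so a concurrent update cannot race with ID generation.

```go
package rule

import (
	"hash/fnv"
	"sync"
)

type Group struct {
	mu   sync.RWMutex
	Name string
	File string
}

// ID returns a checksum built from the group's identifying fields.
func (g *Group) ID() uint64 {
	g.mu.RLock()
	defer g.mu.RUnlock()
	h := fnv.New64a()
	h.Write([]byte(g.File))
	h.Write([]byte("\xff"))
	h.Write([]byte(g.Name))
	return h.Sum64()
}

// Update replaces the identifying fields under the same lock.
func (g *Group) Update(name, file string) {
	g.mu.Lock()
	defer g.mu.Unlock()
	g.Name = name
	g.File = file
}
```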
The regression was introduced during code refactoring. It could potentially
lead to a situation where SIGHUP signals were ignored while
vmalert was still busy initializing the group manager.
Signed-off-by: hagen1778 <roman@victoriametrics.com>
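A minimal sketch of the general pattern (not vmalert's code): subscribe to SIGHUP on a buffered channel before starting the slow initialization, so a signal delivered during init is queued instead of being lost.

```go
package main

import (
	"fmt"
	"os"
	"os/signal"
	"syscall"
	"time"
)

func main() {
	sighup := make(chan os.Signal, 1)
	signal.Notify(sighup, syscall.SIGHUP) // registered before the init below

	time.Sleep(2 * time.Second) // stands in for group manager initialization

	select {
	case <-sighup:
		fmt.Println("got SIGHUP during init; reloading config")
	default:
		fmt.Println("no SIGHUP received")
	}
}
```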
This fixes the following build errors:
```
vendor/cloud.google.com/go/storage/storage.go:1447:53: o.GetCustomerEncryption().GetKeySha256 undefined (type *"google.golang.org/genproto/googleapis/storage/v2".Object_CustomerEncryption has no field or method GetKeySha256)
vendor/cloud.google.com/go/storage/writer.go:439:10: q.GetCommittedSize undefined (type *"google.golang.org/genproto/googleapis/storage/v2".QueryWriteStatusResponse has no field or method GetCommittedSize)
```
The extra `/` may cause issues when additional path prefixes
are configured. Also, removing it makes it consistent
with the rest of the declarations.
Signed-off-by: hagen1778 <roman@victoriametrics.com>
vmctl: properly convert influx bools into integer representation
When using vmctl influx, the import would fail on boolean fields
with:
```
failed to convert value "some".0 to float64: unexpected value type true
```
This converts `true` to `1` and `false` to `0`.
Fixes #1709
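A minimal sketch of the conversion with a hypothetical helper name (not the actual vmctl code): boolean field values are mapped to 1/0 instead of failing the float64 conversion.

```go
package main

import "fmt"

// toFloat64 converts an InfluxDB field value to float64, mapping booleans to 1/0.
func toFloat64(v interface{}) (float64, error) {
	switch t := v.(type) {
	case float64:
		return t, nil
	case int64:
		return float64(t), nil
	case bool:
		if t {
			return 1, nil
		}
		return 0, nil
	default:
		return 0, fmt.Errorf("unexpected value type %v", t)
	}
}

func main() {
	for _, v := range []interface{}{42.5, int64(7), true, false} {
		f, err := toFloat64(v)
		fmt.Println(f, err)
	}
}
```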
Sort series by a hash calculated from the series labels. This should guarantee "random" selection of the returned time series.
Previously the selection could be biased, since time series were sorted alphabetically by label names and label values.
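A minimal sketch with a simplified series type (an assumption, not the real storage code): ordering by a hash of the canonical label string gives a pseudo-random yet deterministic order instead of the alphabetical one.

```go
package main

import (
	"fmt"
	"hash/fnv"
	"sort"
)

type series struct {
	labels string // canonical `name{label="value",...}` form
}

// sortByLabelHash orders series by a hash of their labels rather than alphabetically.
func sortByLabelHash(ss []series) {
	hashOf := func(s string) uint64 {
		h := fnv.New64a()
		h.Write([]byte(s))
		return h.Sum64()
	}
	sort.Slice(ss, func(i, j int) bool {
		return hashOf(ss[i].labels) < hashOf(ss[j].labels)
	})
}

func main() {
	ss := []series{{`m{job="a"}`}, {`m{job="b"}`}, {`m{job="c"}`}}
	sortByLabelHash(ss)
	fmt.Println(ss)
}
```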
Stream parsing mode can be automatically enabled when scraping targets with big response bodies
exceeding -promscrape.minResponseSizeForStreamParse, so it must always be initialized.