lib/promscrape: apply scrape_timeout on receiving the first response byte for stream_parse: true scrape targets
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/1017#issuecomment-767235047
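
For illustration only, here is a minimal standalone sketch of the timeout split this commit introduces (this is not the actual newClient code from lib/promscrape; the helper name, the target URL and the 10s value are made up): http.Transport.ResponseHeaderTimeout bounds the wait for the first response byte with the scrape timeout, while a much larger http.Client.Timeout leaves room for streaming and parsing a huge body.

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

// newScrapeClient is a hypothetical helper showing the idea behind the change:
// the scrape timeout applies to receiving the response headers (the first byte),
// while reading and parsing the full streamed body is allowed to take up to 10x longer.
func newScrapeClient(scrapeTimeout time.Duration) *http.Client {
	return &http.Client{
		Transport: &http.Transport{
			// Fail fast if the target doesn't start responding within the scrape timeout.
			ResponseHeaderTimeout: scrapeTimeout,
		},
		// Allow plenty of time for streaming a response with millions of metrics.
		Timeout: 10 * scrapeTimeout,
	}
}

func main() {
	c := newScrapeClient(10 * time.Second)
	resp, err := c.Get("http://localhost:8428/metrics") // hypothetical scrape target
	if err != nil {
		fmt.Println("scrape error:", err)
		return
	}
	defer resp.Body.Close()
	n, _ := io.Copy(io.Discard, resp.Body) // stand-in for stream parsing
	fmt.Printf("read %d bytes\n", n)
}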
commit 908e35affd (parent eddba29664)
@@ -11,6 +11,7 @@ sort: 14
 Thanks to @johnseekins!
 * BUGFIX: vmagent: properly update `role: endpoints` and `role: endpointslices` scrape targets if the underlying service changes in `kubernetes_sd_config`. See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/1240).
+* BUGFIX: vmagent: apply `scrape_timeout` on receiving the first response byte from `stream_parse: true` scrape targets. Previously it was applied to receiving and *processing* the full response stream. This could result in false timeout errors when scrape target exposes millions of metrics as described [here](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/1017#issuecomment-767235047).
 * BUGFIX: vmstorage: remove empty directories on startup. Such directories can be left after unclean shutdown on NFS storage. Previously such directories could lead to crashloop until manually removed. See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/1142).
@@ -118,8 +118,17 @@ func newClient(sw *ScrapeWork) *client {
 			DisableCompression: *disableCompression || sw.DisableCompression,
 			DisableKeepAlives:  *disableKeepAlive || sw.DisableKeepAlive,
 			DialContext:        statStdDial,
+
+			// Set timeout for receiving the first response byte,
+			// since the duration for reading the full response can be much bigger because of stream parsing.
+			// See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/1017#issuecomment-767235047
+			ResponseHeaderTimeout: sw.ScrapeTimeout,
 		},
-		Timeout: sw.ScrapeTimeout,
+
+		// Set 10x bigger timeout than the sw.ScrapeTimeout, since the duration for reading the full response
+		// can be much bigger because of stream parsing.
+		// See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/1017#issuecomment-767235047
+		Timeout: 10 * sw.ScrapeTimeout,
 	}
 	if sw.DenyRedirects {
 		sc.CheckRedirect = func(req *http.Request, via []*http.Request) error {
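
The effect of this split can be seen with a small standalone demo (not part of the commit; the server, delays and timeout values are made up): a target that returns its headers immediately but streams its body slowly does not trip the strict ResponseHeaderTimeout, and the full read still completes within the larger overall client timeout.

package main

import (
	"fmt"
	"io"
	"net/http"
	"net/http/httptest"
	"time"
)

func main() {
	// Hypothetical target that responds right away but streams its body slowly,
	// mimicking a scrape target that exposes a huge number of metrics.
	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK)
		w.(http.Flusher).Flush() // the first response byte is sent immediately
		for i := 0; i < 5; i++ {
			time.Sleep(100 * time.Millisecond)
			fmt.Fprintf(w, "metric_%d 1\n", i)
			w.(http.Flusher).Flush()
		}
	}))
	defer srv.Close()

	c := &http.Client{
		Transport: &http.Transport{
			// Strict limit on time-to-first-byte only.
			ResponseHeaderTimeout: 200 * time.Millisecond,
		},
		// Generous limit on reading the whole streamed body.
		Timeout: 2 * time.Second,
	}
	resp, err := c.Get(srv.URL)
	if err != nil {
		fmt.Println("unexpected timeout:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("scrape succeeded, %d bytes read over ~500ms\n", len(body))
}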