This should be the way forward when importing libraries in jsonnet. It's
closer to how Go imports look and makes it more obvious where packages
live.
This doesn't break anything, as the old import paths were already symlinks
to the directories that are now used directly.
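For illustration, a trivial but complete sketch of the new style (the import
path here is hypothetical, not one actually changed by this commit):
    // Import via the full, Go-style package path instead of a local symlink.
    local grafana = import 'github.com/grafana/grafonnet-lib/grafonnet/grafana.libsonnet';
    grafana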
Signed-off-by: Matthias Loibl <mail@matthiasloibl.com>
* Make FS space alert thresholds configurable (#1)
This makes it possible to tweak the thresholds for
the NodeFilesystemSpaceFillingUp alerts, which
might be necessary in systems like Kubernetes,
where the image garbage collector runs at 85% disk usage,
so it is not a problem that the disk reaches that level.
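A minimal sketch of such an override, assuming the thresholds end up as
`_config` keys (the key names and values here are illustrative):
    (import 'node-mixin/mixin.libsonnet') + {
      _config+:: {
        // Only start warning once less than 15% space is left, i.e. past the
        // 85% usage point where the Kubernetes image GC kicks in.
        fsSpaceFillingUpWarningThreshold: 15,
        fsSpaceFillingUpCriticalThreshold: 10,
      },
    }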
Signed-off-by: iuri aranda <iuri@skyscrapers.eu>
We actually have to count or sum, respectively, _all_ the selected
metrics for the cluster-wide view, which means it's easiest to use the
`scalar` approach after all (but only in the cluster dashboard). This
still propagates all the labels.
I have extended the comment for the `nodeExporterSelector` to note
that the cluster dashboard only makes sense if all the selected node
exporters actually belong to the same cluster.
Since this is jsonnet, users can easily disable the cluster
dashboard, or even create multiple instances of the dashboards with
different `nodeExporterSelector`s for different clusters.
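A minimal sketch of the multi-cluster case (the import path and cluster
label are assumptions; `grafanaDashboards` is the field mixins commonly
expose):
    local nodeMixin = import 'node-mixin/mixin.libsonnet';
    // One mixin instance per cluster; in practice the dashboard titles/uids
    // would also need to be disambiguated before both sets are deployed.
    local clusterA = nodeMixin {
      _config+:: { nodeExporterSelector: 'job="node", cluster="a"' },
    };
    local clusterB = nodeMixin {
      _config+:: { nodeExporterSelector: 'job="node", cluster="b"' },
    };
    { a: clusterA.grafanaDashboards, b: clusterB.grafanaDashboards }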
Signed-off-by: beorn7 <beorn@grafana.com>
The `instance:node_memory_swap_io_pages:rate1m` rule was intended to
measure the amount of memory pressure a system is under, but its name is
a bit misleading (it specifically refers to swap), and the rate of
`node_vmstat_pgmajfault` is a better metric for memory pressure
(see #1524).
This commit renames `instance:node_memory_swap_io_pages:rate1m` to
`instance:node_vmstat_pgmajfault:rate1m`, and defines it as
`rate(node_vmstat_pgmajfault{%(nodeExporterSelector)s}[1m])`. The
dashboards are updated accordingly.
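In the mixin's rule definitions, the renamed rule then reads roughly like
this (surrounding jsonnet abbreviated; the selector value is just an example):
    local config = { nodeExporterSelector: 'job="node"' };  // illustrative
    {
      record: 'instance:node_vmstat_pgmajfault:rate1m',
      expr: 'rate(node_vmstat_pgmajfault{%(nodeExporterSelector)s}[1m])' % config,
    }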
Signed-off-by: Benoît Knecht <benoit.knecht@fsfe.org>
As per https://github.com/prometheus/node_exporter/pull/1429#discussion_r304210103
we want to fetch all devices and all fs types.
Currently, this is done by setting an empty string, which breaks most queries that rely on it.
This fixes it by setting an appropriate selector instead of an empty string.
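A sketch of the idea, assuming the filesystem selector key is called
`fsSelector` (key name and value are illustrative):
    {
      _config+:: {
        // Explicitly match all real filesystems instead of interpolating an
        // empty string, which breaks the queries that rely on the selector.
        fsSelector: 'fstype!=""',
      },
    }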
Signed-off-by: Sergiusz Urbaniak <sergiusz.urbaniak@gmail.com>
- Use a stacked graph instead of a gauge, as seeing the development over
time is especially useful for disk space usage.
- By only taking one metric per device into account, we avoid
double-counting for devices that are mounted multiple times.
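Illustrative PromQL for the second point (selector details omitted; embedded
as a string in the dashboard jsonnet):
    // `max by (device)` collapses duplicate mounts of the same device
    // before the per-device values are summed up.
    local diskUsedQuery =
      'sum(max by (device) (node_filesystem_size_bytes - node_filesystem_avail_bytes))';
    diskUsedQuery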
Signed-off-by: beorn7 <beorn@grafana.com>
* node-mixin: Improve nodes dashboard
- Use stacking where it makes sense.
- Normalize idle CPU so that stacking is more meaningful (see the sketch below).
- Consistently fill where stacking is used but don't fill where not.
- Fix y axis max value for Idle CPU panel.
- Fix y axis min value for memory usage panel.
- Use `$__interval` for range where applicable (and set min step
to 1m).
- Make the right Y axis for disk I/O actually work.
This is just an incremental improvement. It doesn't touch the more
involved TODOs.
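For the idle CPU normalization mentioned above, one way it can look
(illustrative, not necessarily the exact panel query): divide each CPU's
idle rate by the CPU count so the stacked series top out at 1, and use
`$__interval` as the range with a 1m minimum step.
    local idleCpuQuery = |||
      rate(node_cpu_seconds_total{mode="idle", instance="$instance"}[$__interval])
      / scalar(count(count(node_cpu_seconds_total{instance="$instance"}) by (cpu)))
    |||;
    idleCpuQuery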
Signed-off-by: beorn7 <beorn@grafana.com>
This addresses the blissful scenario where single-node failures are
unproblematic. No reason to wake somebody up if a node is about to
screw itself up by filling the disk.
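A minimal sketch of how a user could keep such single-node alerts from
paging, assuming the mixin exposes the severity as a config knob (the key
name is illustrative):
    (import 'node-mixin/mixin.libsonnet') + {
      _config+:: {
        // Downgrade alerts that only concern a single node so they don't page.
        nodeCriticalSeverity: 'warning',
      },
    }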
Signed-off-by: beorn7 <beorn@grafana.com>
- Normalize cluster memory utilisation.
- Fix missing `1m` in memory saturation.
- Have both disk-related rows next to each other instead of having the
network row in between.
- Correctly render transmit network traffic as negative, using
`seriesOverrides` and `min: null` for the y-axis (see the sketch below).
- Make panel and row naming consistent.
- Remove legend where it would just display a single entry with
exactly the title of the panel.
- Fix metric name in individual node CPU Saturation panel.
- Break up disk space utilisation by device in the panel for an
individual node.
NB: All of that doesn't touch any of the more subtle issues captured in the
various TODOs.
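For the transmit-as-negative point above, a sketch of the relevant graph
panel fields (standard Grafana graph panel JSON expressed in jsonnet; the
alias pattern is an assumption):
    {
      seriesOverrides: [
        // Flip the matching series below the x-axis.
        { alias: '/.*transmit.*/', transform: 'negative-Y' },
      ],
      yaxes: [
        { format: 'Bps', min: null },  // min: null so negative values are shown
        { format: 'Bps', min: null },
      ],
    }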
Signed-off-by: beorn7 <beorn@grafana.com>
This will cause a query to be valid even if the selector values are
empty.
Additionally, this fixes the query responsible for disk space usage.
Signed-off-by: paulfantom <pawel@krupa.net.pl>
The only deviation so far was using format="percentunit"
in a Grafana gauge. That change wasn't even used consistently in this repo,
so I opted to stick with "upstream" for now. If changes are
really needed, we can try to change upstream first.
Another change was done in parallel here and upstream, but it was
"more correct" upstream (changing the datasource to the $datasource
variable, which was only partially applied here). That is another argument
for using the upstream version rather than copying it here.
Signed-off-by: beorn7 <beorn@grafana.com>
* Replace supervisord xmlrpc library
* Use `github.com/mattn/go-xmlrpc`, which doesn't leak goroutines.
* Fix uptime metric
* Use Prometheus best practices for uptime metric.
* Use "start time" rather than "uptime".
* Don't emit a start time if the process is down.
* Add changelog entry.
* Add example compatibility rules.
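A sketch of what such a compatibility rule could look like (both metric
names and the record name are assumptions, not the exact ones added):
    // Recording rule sketch: derive an old-style uptime value from the new
    // start-time metric, following the Prometheus best practice of exporting
    // a start timestamp and computing uptime at query time.
    {
      record: 'supervisord_up_time_seconds',
      expr: 'time() - supervisord_start_time_seconds',
    }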
Signed-off-by: Ben Kochie <superq@gmail.com>
* Add metrics from SNTPv4 packet to ntp collector & add ntpd sanity check
1. Checking the local clock against a remote NTP daemon is a bad idea: a
local ntpd acting as a client does it better and avoids excessive load on
the remote NTP server, so the collector is refactored to query the local
NTP server.
2. Checking the local clock against a remote one does not check the local
ntpd itself. The local ntpd may be down or out of sync due to network
issues while the clock still looks OK.
3. Checking an NTP server based on the sanity of its response is tricky and
depends on the ntpd implementation, which is why the aggregate
`node_ntp_sanity` variable is exported.
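An illustrative way to consume that aggregate metric (an alert rule sketch,
not part of this change; the alert name is made up and it assumes 1 means
sane):
    {
      alert: 'NodeNtpNotSane',
      expr: 'node_ntp_sanity == 0',
      'for': '15m',  // quoted because `for` is a jsonnet keyword
    }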
* `govendor add golang.org/x/net/ipv4`, as it is a dependency of github.com/beevik/ntp
* Update github.com/beevik/ntp to include boring SNTP fix
* Use variable name from RFC5905
* ntp: move code to make export of raw metrics more explicit
* Move NTP math to `github.com/beevik/ntp`
* Make `golint` happy
* Add some brief docs explaining `ntp` #655 and `timex` #664 modules
* ntp: drop XXX comment now that the decision has been made
* ntp: add `_seconds` suffix to relevant metrics
* Better `node_ntp_leap` comment
* s/node_ntp_reftime/node_ntp_reference_timestamp_seconds/ as requested by @discordianfish
* Extract subsystem name to const as suggested by @SuperQ