node_exporter currently triggers autofs to mount the underlying
filesystem on every scrape. This is undesirable, so it is better to
ignore autofs mount points. The filesystem that autofs mounts will
still be monitored once the (real) filesystem is actually mounted.
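The idea, as a simplified sketch (not the actual collector code; the
mount-table parsing here is deliberately stripped down), is to skip any
mount entry whose filesystem type is "autofs", so the mount point is
never stat()ed during a scrape:

package main

import (
    "bufio"
    "fmt"
    "os"
    "strings"
)

// mountEntry is a stripped-down view of one /proc/mounts line.
type mountEntry struct {
    device, mountPoint, fsType string
}

// listMounts returns all mounts except autofs entries. Skipping autofs
// means the mount point is never stat()ed here, so the automounter is
// not triggered on every scrape; the real filesystem still shows up as
// its own entry once it has been mounted.
func listMounts() ([]mountEntry, error) {
    f, err := os.Open("/proc/mounts")
    if err != nil {
        return nil, err
    }
    defer f.Close()

    var mounts []mountEntry
    s := bufio.NewScanner(f)
    for s.Scan() {
        fields := strings.Fields(s.Text())
        if len(fields) < 3 || fields[2] == "autofs" {
            continue
        }
        mounts = append(mounts, mountEntry{fields[0], fields[1], fields[2]})
    }
    return mounts, s.Err()
}

func main() {
    mounts, err := listMounts()
    if err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
    for _, m := range mounts {
        fmt.Println(m.mountPoint, m.fsType)
    }
}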
These messages get printed all the time, as there are some tokens in
the /proc file that we simply don't support. It's better to keep them
as debug messages, which may come in useful if new tokens start to
appear.
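A tiny sketch of that policy (the token format and the set of supported
tokens are made up for illustration; the real collector's logging calls
differ):

package main

import (
    "log"
    "strings"
)

// supported lists the tokens this hypothetical parser knows about.
var supported = map[string]bool{"used": true, "free": true}

// parseTokens handles known tokens and only logs unknown ones when
// debug logging is enabled, instead of treating them as errors.
func parseTokens(line string, debug bool) {
    for _, tok := range strings.Fields(line) {
        name := strings.SplitN(tok, ":", 2)[0]
        if !supported[name] {
            if debug {
                log.Printf("debug: ignoring unsupported token %q", tok)
            }
            continue
        }
        // ... update the corresponding metric here ...
    }
}

func main() {
    parseTokens("used:42 free:7 shiny_new_field:1", true)
}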
Collect metrics from the StorCLI utility on the health of MegaRAID
hardware RAID controllers and write them to stdout so that they can be
used by the textfile collector.
We parse the JSON output that StorCLI provides.
The script must be run as root or with appropriate capabilities for
storcli to be able to access the RAID card.
It is designed to run under Python 2.7, using the system Python
provided by many Linux distributions.
The metrics look like this:
mbostock@host:~$ sudo ./storcli.py
megaraid_status_code 0
megaraid_controllers_count 1
megaraid_emergency_hot_spare{controller="0"} 1
megaraid_scheduled_patrol_read{controller="0"} 1
megaraid_virtual_drives{controller="0"} 1
megaraid_drive_groups{controller="0"} 1
megaraid_virtual_drives_optimal{controller="0"} 1
megaraid_degraded{controller="0"} 0
megaraid_battery_backup_healthy{controller="0"} 1
megaraid_ports{controller="0"} 8
megaraid_failed{controller="0"} 0
megaraid_drive_groups_optimal{controller="0"} 1
megaraid_healthy{controller="0"} 1
megaraid_physical_drives{controller="0"} 24
megaraid_controller_info{controller="0", model="AVAGOMegaRAIDSASPCIExpressROMB"} 1
mbostock@host:~$
- Use the right number of printf() arguments. Use %q where it makes sense.
- Use "DRBD" instead of "Drbd", per Go's style guide.
- Add _total suffixes to counter metrics.
- Mention the unit (bytes) in the documentation strings as well.
This collector exposes most of the useful information that can be found
in /proc/drbd. Sizes are normalised to be in bytes, as /proc/drbd uses
kibibytes.
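As a sketch of those conventions, illustrating the _total suffix and
the unit-in-help-string points from the list above (the metric name,
label and sample value below are illustrative, not necessarily the
collector's actual ones), a counter read from /proc/drbd could be
exported via client_golang like this:

package main

import (
    "fmt"

    "github.com/prometheus/client_golang/prometheus"
)

// The name carries a _total suffix because this is a counter, and the
// help string states the unit explicitly.
var writtenDesc = prometheus.NewDesc(
    "node_drbd_disk_written_bytes_total",
    "Net data written to the local disk, in bytes.",
    []string{"device"}, nil,
)

func main() {
    // /proc/drbd reports sizes in kibibytes; normalise to bytes.
    const kibibytes = 123456.0
    bytes := kibibytes * 1024

    m := prometheus.MustNewConstMetric(
        writtenDesc, prometheus.CounterValue, bytes, "drbd0",
    )
    fmt.Println(m.Desc(), bytes)
}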
This change adds a new collector called "nfs" that parses the contents
of /proc/net/rpc/nfs and turns it into metrics. It can be used to
inspect the number of operations per type, but also to keep an eye on
an excessive number of retransmissions, which may indicate connectivity
issues.
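A rough sketch of the parsing idea (not the collector itself; the
metric names printed here are illustrative): the client statistics are
whitespace-separated lines, and the "rpc" line carries the total call
and retransmission counters:

package main

import (
    "bufio"
    "fmt"
    "os"
    "strconv"
    "strings"
)

func main() {
    f, err := os.Open("/proc/net/rpc/nfs")
    if err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
    defer f.Close()

    s := bufio.NewScanner(f)
    for s.Scan() {
        fields := strings.Fields(s.Text())
        // The "rpc" line is "rpc <calls> <retransmissions> <authrefresh>";
        // the "proc2"/"proc3"/"proc4" lines hold per-operation counts and
        // can be split up the same way.
        if len(fields) < 4 || fields[0] != "rpc" {
            continue
        }
        calls, _ := strconv.ParseFloat(fields[1], 64)
        retrans, _ := strconv.ParseFloat(fields[2], 64)
        fmt.Printf("node_nfs_rpc_operations_total %g\n", calls)
        fmt.Printf("node_nfs_rpc_retransmissions_total %g\n", retrans)
    }
}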
I've picked the name "nfs", as most operating systems use "nfs" for the
client component and "nfsd" for the server component. If we want to add
stats for the NFS server as well, we'd better call such a collector
"nfsd".
The chip label generation has been changed in #334 to prefer the
unique device path (e.g. the location on the PCI bus) due to #333.
Here, a new annotation metric ``node_hwmon_chip_names`` is
introduced which makes it possible to link the unique chip sysfs
path to a human-readable chip name that may not be unique among
chip sysfs paths (for example, dual-socket systems have multiple
chipType="coretemp" sensors).
This mitigates the downsides of the solution to #333 (namely that
the device path may not be stable across kernels and reboots) in
cases where it does not matter that multiple devices may share the
same human-readable name (e.g. for aggregation, or where at most
one device with a given chip name is present).
For cases where no human-readable name can be derived, the
annotation metric is not emitted.
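A small sketch of that annotation-metric pattern (the label names and
example values below are illustrative): a gauge fixed at 1 whose labels
pair the unique chip identifier with the human-readable name, emitted
only when such a name could be derived:

package main

import (
    "fmt"

    "github.com/prometheus/client_golang/prometheus"
)

var chipNamesDesc = prometheus.NewDesc(
    "node_hwmon_chip_names",
    "Annotation metric linking a chip's unique identifier to its human-readable name.",
    []string{"chip", "chip_name"}, nil,
)

func main() {
    chip := "platform_coretemp_0" // unique, derived from the sysfs device path
    humanName := "coretemp"       // human-readable, possibly shared by several chips

    // When no human-readable name can be derived, nothing is emitted.
    if humanName == "" {
        return
    }
    m := prometheus.MustNewConstMetric(
        chipNamesDesc, prometheus.GaugeValue, 1, chip, humanName,
    )
    fmt.Println(m.Desc())
}

In queries, this series can then be joined onto the other hwmon metrics
via the shared chip label to re-attach the readable name.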