* Update golang.org/x/sys/unix
This allows using the simplified string conversion of Utsname members.
* Simplify Utsname string conversion
Use Utsname from golang.org/x/sys/unix, whose members are byte arrays
instead of int8/uint8 arrays. This allows simplifying the string
conversion of these members.
* Correct buffer_bytes > INT_MAX on BSD/amd64.
The sysctl vfs.bufspace returns either an int or a long, depending on
the value. Large values of vfs.bufspace will result in error messages
like:
couldn't get meminfo: cannot allocate memory
This change detects the returned data type and casts appropriately.
* Added explicit length checks per feedback.
* Flatten Value() to make it easier to read.
* Simplify per feedback.
* Fix style.
* Doc updates.
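A minimal sketch of the size detection described above, assuming golang.org/x/sys/unix's SysctlRaw and little-endian byte order on amd64 (illustrative only, not necessarily the patch's implementation):
```
package main

import (
	"encoding/binary"
	"fmt"

	"golang.org/x/sys/unix"
)

func main() {
	// vfs.bufspace is reported as a 4-byte int or an 8-byte long depending
	// on its value, so inspect the length of the raw result before decoding.
	raw, err := unix.SysctlRaw("vfs.bufspace")
	if err != nil {
		panic(err)
	}
	var bufspace uint64
	switch len(raw) {
	case 4:
		bufspace = uint64(binary.LittleEndian.Uint32(raw))
	case 8:
		bufspace = binary.LittleEndian.Uint64(raw)
	default:
		panic(fmt.Sprintf("unexpected sysctl size %d for vfs.bufspace", len(raw)))
	}
	fmt.Println("buffer_bytes:", bufspace)
}
```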
The github.com/beevik/ntp package was recently updated with some
API changes that broke node_exporter. This commit fetches the
latest version of the ntp package and brings node_exporter in
line with the latest API.
* Move NodeCollector into package collector
* Refactor collector enabling
* Update README with new collector enabled flags
* Fix out-of-date inline flag reference syntax
* Use new flags in end-to-end tests
* Add flag to disable all default collectors
* Track if a flag has been set explicitly
* Add --collectors.disable-defaults to README
* Revert disable-defaults flag
* Shorten flags
* Fixup timex collector registration
* Fix end-to-end tests
* Change procfs and sysfs path flags
* Fix review comments
This collector is based on the adjtimex(2) system call. The collector returns
three values: the synchronisation status, the offset to the remote reference,
and the local clock frequency adjustment.
Values are taken from the kernel timekeeping data structures to avoid getting
involved in how the synchronisation is implemented. By that I mean one should
not care whether time is updated using ntpd, systemd-timesyncd, ptpd, and so on.
Since every time sync implementation ultimately tells the kernel the status of
the clock, one can simply omit the software in between and look at the results
of the syncing. As a positive side effect this makes the collector very quick
and conceptually specific: it does not monitor the availability of an NTP
server, the network in between, DNS resolution, or other unrelated but
necessary things.
The minimum set of values to keep an eye on are the following three:
node_timex_sync_status tells whether the local clock is in sync with a remote
clock. The value is set to zero when synchronisation to a reliable server
is lost or the time sync software is misconfigured.
node_timex_offset_seconds tells how much the local clock is off compared
to the reference. With multiple time references this value is the outcome
of the RFC 5905 adjustment algorithm. Ideally the offset should be close to
zero; how large a value is acceptable depends on the use case. For example,
a typical web server is probably fine with an offset of about 0.1 or less,
but that would not be good enough for a mobile phone base station operator.
node_timex_freq tells the amount of adjustment to the local clock tick
frequency. For example, if the offset is one second and growing, the local
clock needs to be instructed to tick quicker. The numeric value itself is not
very important, and occasional small adjustments are fine. When the frequency
is unusually unstable, one can assume the time stamps will not be accurate
very far into the sub-second range. Explaining why the local clock frequency
behaves like a passenger on a roller coaster is a different matter;
explanations can vary from system load to environmental issues such as a
machine being physically too hot.
The rest of the measurements can help when debugging. If you run a clock
server, you probably want to collect and keep track of everything.
Pull-request: https://github.com/prometheus/node_exporter/pull/664
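A minimal sketch of reading these values via adjtimex(2) on Linux with the standard syscall package; the unit conversions and metric names used by the collector are only hinted at in the comments:
```
package main

import (
	"fmt"
	"syscall"
)

// timeError is the adjtimex(2) return state meaning the clock is unsynchronised.
const timeError = 5 // TIME_ERROR in <sys/timex.h>

func main() {
	var tx syscall.Timex
	// With no mode bits set, Adjtimex only reads the kernel timekeeping state.
	state, err := syscall.Adjtimex(&tx)
	if err != nil {
		panic(err)
	}
	syncStatus := 1.0
	if state == timeError {
		syncStatus = 0.0
	}
	fmt.Println("node_timex_sync_status:", syncStatus)
	// Offset is in microseconds (or nanoseconds when STA_NANO is set);
	// Freq is in scaled ppm (2^-16 ppm units). Conversions are omitted here.
	fmt.Println("offset (raw):", tx.Offset)
	fmt.Println("freq (raw):", tx.Freq)
}
```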
* Add metrics from SNTPv4 packet to ntp collector & add ntpd sanity check
1. Checking the local clock against a remote NTP daemon is a bad idea; a
local ntpd acting as a client does this better and avoids excessive load on
the remote NTP server, so the collector is refactored to query the local NTP
server.
2. Checking the local clock against a remote one does not check the local
ntpd itself. The local ntpd may be down or out of sync due to network issues
while the clock is still OK.
3. Checking an NTP server via the sanity of its response is tricky and
depends on the ntpd implementation, which is why the common `node_ntp_sanity`
variable is exported.
* `govendor add golang.org/x/net/ipv4`, which is a dependency of github.com/beevik/ntp
* Update github.com/beevik/ntp to include boring SNTP fix
* Use variable name from RFC5905
* ntp: move code to make export of raw metrics more explicit
* Move NTP math to `github.com/beevik/ntp`
* Make `golint` happy
* Add some brief docs explaining `ntp` #655 and `timex` #664 modules
* ntp: drop XXX comment that got its decision
* ntp: add `_seconds` suffix to relevant metrics
* Better `node_ntp_leap` comment
* s/node_ntp_reftime/node_ntp_reference_timestamp_seconds/ as requested by @discordianfish
* Extract subsystem name to const as suggested by @SuperQ
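A rough sketch of querying the local daemon with github.com/beevik/ntp; the field names assume the package's current API, and the metric mapping shown is illustrative only:
```
package main

import (
	"fmt"

	"github.com/beevik/ntp"
)

func main() {
	// Query the local NTP daemon instead of a remote reference server.
	resp, err := ntp.Query("127.0.0.1")
	if err != nil {
		panic(err)
	}
	fmt.Println("node_ntp_stratum:", resp.Stratum)
	fmt.Println("node_ntp_leap:", resp.Leap)
	fmt.Println("node_ntp_offset_seconds:", resp.ClockOffset.Seconds())
	fmt.Println("node_ntp_rtt_seconds:", resp.RTT.Seconds())
	fmt.Println("node_ntp_reference_timestamp_seconds:",
		float64(resp.ReferenceTime.UnixNano())/1e9)
}
```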
* cpu: Metric 'package_throttles_total' is per package.
'package_throttles_total' is per package, not per CPU. This also reduces
the total number of cpu time series a lot (especially for multi-core CPUs).
* cpu: Better handling of a cpulist edge-case.
* cpu: Extract the package number from the directory name.
Do not rely on the range index.
* cpu: Add package_throttle_count for node0 cpu1
This file must be ignored by the cpu collector.
This avoids issues with integer overflows on 32-bit architectures. The
Prometheus data format is float64, so regardless of the architecture we
should handle large numbers.
Fixes #629.
* Add bcache collector for Linux
This collector gathers metrics related to the Linux block cache
(bcache) from sysfs.
* Removed commented out code
* Use project comment style
* Add _sectors to metric name to indicate unit
* Really use project comment style
* Rename bcache.go to bcache_linux.go
* Keep collector namespace clean
Rename:
- metric -> bcacheMetric
- periodStatsToMetrics -> bcachePeriodStatsToMetric
* Shorten slice initialization
* Change label names to backing_device, cache_device
* Remove five minute metrics (keep only total)
* Include units in additional metric names
* Enable bcache collector by default
* Provide metrics in seconds, not nanoseconds
* remove metrics with label "all"
* Add fixtures, update end-to-end for bcache collector
* Move fixtures/sys into tar.gz
This changeset moves the collector/fixtures/sys directory into
collector/fixtures/sys.tar.gz and tweaks the Makefile to unpack the
tarball before tests are run.
The reason for this change is that Windows does not allow colons in a
path (colons are present in some of the bcache fixture files), nor can
it (out of the box) deal with pathnames longer than 260 characters
(which we would be increasingly likely to hit if we tried to replace
colons with longer codes that are guaranteed not to turn up in regular
file names).
* Add ttar: plain text archive, replacement for tar
This changeset adds ttar, a plain text replacement for tar, and uses it
for the sysfs fixture archive. The syntax is loosely based on tar(1).
Using a plain text archive makes it possible to review changes without
downloading and extracting the archive. Also, when working on the repo,
git diff and git log become useful again, allowing a committer to verify
and track changes over time.
The code is written in bash, because bash is available out of the box on
all major flavors of Linux and on macOS. The feature set used is
restricted to bash version 3.2 because that is what Apple is still
shipping.
The program also works on Windows if bash is installed. Obviously, it
does not solve the Windows limitations (path length limited to 260
characters, no symbolic links) that prompted the move to an archive
format in the first place.
* Add diskstats collector for Darwin
* Update year in the header
* Update README.md
* Add github.com/lufia/iostat to vendored packages
* Change stats to follow naming guidelines
* Add an entry for github.com/lufia/iostat in vendor.json
* Remove /proc/diskstats from description
* Add qdisc collector for Linux
This collector gathers basic queueing discipline metrics via netlink,
similarly to what `tc -s qdisc show` does.
* qdisc collector: nl-specific code moved, names fixed
- netlink-specific parts moved to github.com/ema/qdisc
- avoid using shortened names
- counters renamed into XXX_total
* Get rid of parseMessage error checking leftover
* Add github.com/ema/qdisc to vendored packages
* Update help texts and comments
* Add qdisc collector to README file
* qdisc collector end-to-end testing
* Update qdisc dependency to latest version
Update github.com/ema/qdisc dependency to revision 2c7e72d, which
includes unit testing.
* qdisc collector: rename "iface" label into "device"
According to Mellanox, it is standard practice that the port_xmit_data and port_rcv_data
files are split into 4 lanes. To get the actual transmit and receive values for each
port, the metric needs to be multiplied by 4.
Signed-Off-By: Robert Clark <robert.d.clark@hpe.com>
* Silently ignore a nonexistent bonding_masters file
Add an empty fixtures dir without a bonding_masters file to test.
* Moved the check to the Update() method
Dropped the empty test dir.
Since Go 1.8, 32-bit MIPS Big/Little Endian are supported, assuming the
target runs Linux and the kernel either emulates an FPU or can access
the CPU's own.
This allows node_exporter to build for mips and mipsle, opening up
the possibility of running it on things like home routers;
(DD-|Open|ASUS-)Wrt firmware usually has the necessary bits in place.
* Implement commonalities and linux support for ARP collection
* Add ARP collector to fixtures and run as part of e2e tests
* Bubble up scanner errors
* Use single return values where it makes sense
* Add missing annotation
* Move arp_common into arp_linux
* Add license header to arp_linux.go
* Address initial feedback
* Use strings.Fields instead of strings.Split
* Deal with scanner.Err() rather than throwing away errors
* Check for scan errors in-line before interacting with the entries map
* Don't interact with potentially empty text from scan
* Check for scan errors outside the scan loop
* Add comment about moving procfs parsing
* Add more direct comment
* Update initialism style to match go style guide
* Put function args on the same line
* Add TODO in front of comment about procfs extraction
* Guard against strings.Fields returning an empty slice
* Be more defensive about ARP table format and use upcase more broadly
* Enable the ARP collector by default
* Add ARP collector to the README
* Remove 'entry'
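A condensed sketch of the parsing approach described in the bullets above (header skip, strings.Fields, guards against short lines, and scanner error handling outside the loop); the metric and label names are illustrative:
```
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/proc/net/arp")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	entries := map[string]uint32{}
	scanner := bufio.NewScanner(f)
	scanner.Scan() // skip the header line
	for scanner.Scan() {
		fields := strings.Fields(scanner.Text())
		// Guard against empty or truncated lines; the device is the last column.
		if len(fields) < 6 {
			continue
		}
		entries[fields[5]]++
	}
	// Check for scan errors outside the scan loop.
	if err := scanner.Err(); err != nil {
		panic(err)
	}
	for device, count := range entries {
		fmt.Printf("node_arp_entries{device=%q} %d\n", device, count)
	}
}
```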
Instead of maintaining a counter metric for device errors in memory,
this change exports a gauge and uses const metrics to avoid leaking
metrics for unmounted filesystems.
Older versions of the OFED drivers contain 64-bit variants of the port counters, located in a directory named 'counters_ext'. This patch includes these older metrics, which have since been deprecated with OFED 4.0.
Signed-Off-By: Robert Clark <robert.d.clark@hpe.com>
In case a metric file within the InfiniBand collector doesn't exist, skip the metric in order to allow collection of the remaining valid InfiniBand metrics.
Signed-Off-By: Robert Clark <robert.d.clark@hpe.com>
Named return variables should only be used to describe the returned type
further, e.g. `err error` doesn't add any new information and is just
stutter.
Add new metrics for the InfiniBand network protocol, including the number of packets sent and received, the number of times the link has been downed, and how many times the link has recovered from an error state.
Signed-Off-By: Robert Clark <robert.d.clark@hpe.com>
Removed all global types that were unnecessary, and refactored to use constructor-created values and inline values instead of globals.
Signed-Off-By: Joe Handzik <joseph.t.handzik@hpe.com>
This also involves removing zfs_zpool code for now.
Signed-Off-By: Corey Stewart <stewa169@purdue.edu>
Signed-Off-By: Joe Handzik <joseph.t.handzik@hpe.com>
This patch makes stylistic changes to error strings, unexports method names by lower casing them, removes unused dataSetMetric, and adds copyright/licence information.
Signed-Off-By: Corey Stewart <stewa169@purdue.edu>
It is tested on FreeBSD 10.2-RELEASE and Linux (ZFS on Linux 0.6.5.4).
On FreeBSD, Solaris, etc. ZFS metrics are exposed through sysctls.
ZFS on Linux exposes the same metrics through procfs `/proc/spl/...`.
In addition to sysctl metrics, 'computed metrics' are exposed by
the collector, which are based on several sysctl values.
There is some conditional logic involved in computing these metrics
which cannot be easily mapped to PromQL.
Not all 92 ARC sysctls are exposed right now but this can be changed
with one additional LOC each.
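A hedged sketch of reading ARC values from the ZFS on Linux procfs interface, assuming the usual arcstats kstat location; metric naming here is illustrative, and on FreeBSD the same figures come from kstat sysctls instead:
```
package main

import (
	"bufio"
	"fmt"
	"os"
	"strconv"
	"strings"
)

func main() {
	f, err := os.Open("/proc/spl/kstat/zfs/arcstats")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		fields := strings.Fields(scanner.Text())
		// Data rows are "name type data"; skip the kstat header lines.
		if len(fields) != 3 || fields[1] == "type" {
			continue
		}
		value, err := strconv.ParseUint(fields[2], 10, 64)
		if err != nil {
			continue
		}
		fmt.Printf("zfs_arc_%s %d\n", fields[0], value)
	}
	if err := scanner.Err(); err != nil {
		panic(err)
	}
}
```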
The devstat API expects us to reuse one devinfo for many invocations of
devstat_getstats. In particular, it allocates and resizes memory
referenced by devinfo.
Querying the number of devices separately from the device list itself is
racy. Devices may be added or removed between the two calls; and removed
devices would lead to a segfault.
The memory allocated by calloc was never freed. Since the devinfo struct
never leaves the function anyway, we might as well just allocate it on
the stack.
It seems Solaris prefers "sys/loadavg.h" over "stdlib.h" when
fetching the load average.
For Illumos-based OSes it was required to include "sys/time.h" to
ensure that "hrtime_t" was defined.
https://www.illumos.org/issues/6002
It also required setting the ldflags "-fno-stack-protector -lssp" to
avoid undefined symbols when linking with gcc.
```
/opt/local/go/pkg/tool/solaris_amd64/link: running gcc failed: exit status 1
Undefined                       first referenced
 symbol                             in file
__stack_chk_fail                    /tmp/go-link-138622936/000002.o
__stack_chk_guard                   /tmp/go-link-138622936/000002.o
```
Instead of doing the whole metric exposition in a platform-specific collector
implementation, this creates and updates the metrics in meminfo.go and
expects a platform-specific implementation of getMemInfo on
*meminfoCollector.
This removes some error handling, which should be fine. If the calls
fail, we will get zeroes, which is a safe enough fallback.
Additionally, if the first sysctl (page_size) succeeds it is unlikely
that the other ones will fail.
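A minimal sketch of the meminfo split described above, assuming the usual client_golang const-metric helpers; the getMemInfo body below is a placeholder standing in for the platform-specific files:
```
package main

import (
	"fmt"

	"github.com/prometheus/client_golang/prometheus"
)

type meminfoCollector struct{}

// getMemInfo would live in meminfo_linux.go, meminfo_darwin.go, and so on.
func (c *meminfoCollector) getMemInfo() (map[string]float64, error) {
	return map[string]float64{"MemTotal_bytes": 8 << 30}, nil // placeholder data
}

// Update builds metrics from whatever map the platform-specific code returns.
func (c *meminfoCollector) Update(ch chan<- prometheus.Metric) error {
	memInfo, err := c.getMemInfo()
	if err != nil {
		return fmt.Errorf("couldn't get meminfo: %s", err)
	}
	for k, v := range memInfo {
		ch <- prometheus.MustNewConstMetric(
			prometheus.NewDesc(
				prometheus.BuildFQName("node", "memory", k),
				fmt.Sprintf("Memory information field %s.", k), nil, nil),
			prometheus.GaugeValue, v)
	}
	return nil
}

func main() {
	ch := make(chan prometheus.Metric, 16)
	if err := (&meminfoCollector{}).Update(ch); err != nil {
		panic(err)
	}
	close(ch)
	for m := range ch {
		fmt.Println(m.Desc())
	}
}
```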
node_exporter currently triggers autofs to mount the underlying
filesystem on every scrape. This is undesirable. Better ignore autofs.
The underlying filesystem that autofs mounts will be monitored though,
when the (real) filesystem is mounted.
They get printed all the time, as there are some tokens in the /proc
file that we simply don't support. It's better to keep these as
debugging messages, which may come in useful if new tags start to
appear.
- Use the right number of printf() arguments. Use %q where it makes sense.
- Use "DRBD" instead of "Drbd", per Go's style guide.
- Add _total suffixes to counter metrics.
- Mention the unit (bytes) in documentation strings once more.
This collector exposes most of the useful information that can be found
in /proc/drbd. Sizes are normalised to be in bytes, as /proc/drbd uses
kibibytes.
This change adds a new collector called "nfs" that parses the contents
of /proc/net/rpc/nfs and turns it into metrics. It can be used to
inspect the number of operations per type, but also to keep an eye on an
excessive number of retransmissions, which may indicate connectivity
issues.
I've picked the name "nfs", as most operating systems use "nfs" for the
client component and "nfsd" as the server component. If we want to add
stats for the NFS server as well, we'd better call such a collector
"nfsd".
The chip label generation has been changed in #334 to prefer the
unique device path (e.g. the location on the PCI bus) due to #333.
Here, a new annotation metric `node_hwmon_chip_names` is
introduced which allows linking the unique chip sysfs path to a
human-readable chip name that may not be unique among chip sysfs
paths (for example, dual-socket systems have multiple
chipType="coretemp" sensors).
This mitigates the downsides of the solution to #333
(namely that the device path may not be stable across kernels and
reboots) for cases where it does not matter that multiple devices
may have the same human-readable name (e.g. aggregation, or where
at most one device with a common chip name is present).
For cases where no human-readable name can be derived, the
annotation metric is not emitted.
We seem to have a small number of Linux servers here that have lines in
/proc/mdstat that cannot be parsed by the node exporter, due to them
containing attributes that are not matched by the regular expression
("super 1.2").
Extend the regular expression to skip this data, just like we do for all
of the other status lines.
* Prefer device path based names over exported names
For some sensors (like coretemp) it is possible that multiple
instances exist, thus base the name on the device path and not on
the exported name.
* Update end-to-end test for dual socket machines
Explicitly have 2 coretemp instances with a symlink for the device
such that the hwmon collector must pick that name (or fail)
* Add Linux NUMA "numastat" metrics
Read the `numastat` metrics from /sys/devices/system/node/node* when reading NUMA meminfo metrics.
* Update end-to-end test output.
* Add `numastat` metrics as counters.
* Add tests for error conditions.
* Refactor meminfo numa metrics struct
* Refactor meminfoKey into a simple struct of metric data.
This makes it easier to pass slices of metrics around.
* Refactor tests.
* Fixup: Add suggested fixes.
* Fixup: More fixes
* Add another scanner.Err() return
* Add "_total" to counter metrics.
* Add hwmon support (mainly known from lm-sensors)
This commit adds initial support for Linux hardware sensors, exported
through sysfs.
Details of the interface can be found at
https://www.kernel.org/doc/Documentation/hwmon/sysfs-interface
* Add end-to-end test with some real life data
* Cleanup comments on hwmon collector
* Drop raw sensor name from hwmon output
* Let the sensor label be "sensor"
* Add hwmon short description to README.
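For reference, a minimal sketch of the sysfs interface the collector reads; real chips expose many more files (voltages, fans, power, and so on) than the single temperature input shown here:
```
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strconv"
	"strings"
)

func main() {
	matches, err := filepath.Glob("/sys/class/hwmon/hwmon*/temp1_input")
	if err != nil {
		panic(err)
	}
	for _, path := range matches {
		data, err := os.ReadFile(path)
		if err != nil {
			continue
		}
		raw, err := strconv.ParseFloat(strings.TrimSpace(string(data)), 64)
		if err != nil {
			continue
		}
		// Values are reported in millidegrees; export degrees Celsius.
		fmt.Printf("%s: %g °C\n", filepath.Dir(path), raw/1000.0)
	}
}
```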
The correct frequency is the systimer frequency,
not the stathz.
From one of the DragonFly developers:
The bump upon each statclock is:
((cur_systimer - prev_systimer) * systimer_freq) >> 32
systimer_freq can be extracted from following
sysctl in userspace:
sysctl kern.cputimer.freq
The convention of the Linux driver is nvme($device)n($namespace)p($partition). On *BSD it seems to be different, using "ns" instead of "n" as the namespace separator.
Previously the raw time difference was used, which includes the network trip time
between the node and the NTP server. This makes setting alerts based on the value
troublesome, as it depends on the latency as well as the clock offset.
logind provides a nice interface to find out the number of sessions
on a system; it is used on most Linux distributions, even those which
aren't using systemd.
The exporter exposes the total number of sessions indexed by the following
attributes:
* seat
* type ("tty", "x11", ...)
* class ("user", "greeter", ...)
* remote ("true"/"false")
This removes the requirement to run `node_exporter` as root or with read
access to `/dev/kmem` in order to get CPU usage statistics.
Once FreeBSD adds a macro for the `kern.cp_times` sysctl, the
`setupSysctlMIBs()` function should be replaced by usage of the macro.
When compiling `20ecedd0b4c983bd7b88f97cd7a21461988a6c12` with GNU make (`gmake`) on FreeBSD 10.2-RELEASE, I get the following error:
```
collector/filesystem_bsd.go:60: non-bool mnt[i].f_flags & MNT_RDONLY (type C.uint64_t) used as if condition
Makefile.COMMON:85: recipe for target 'node_exporter' failed
gmake: *** [node_exporter] Error 2
```
This problem is fixed by this patch.
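The error comes from Go treating integers and booleans as distinct types in conditions. A minimal illustration of the kind of fix involved; the MNT_RDONLY value below is a placeholder for this example only:
```
package main

import "fmt"

const MNT_RDONLY = 0x1 // placeholder value, normally provided via cgo

func main() {
	var flags uint64 = MNT_RDONLY
	// if flags&MNT_RDONLY { ... }     // does not compile: non-bool used as if condition
	if flags&MNT_RDONLY != 0 { // explicit comparison against zero compiles
		fmt.Println("read-only mount")
	}
}
```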
It turns out that on some kernels (notably CentOS 6) there is an empty line
inserted at the beginning of the /sys/devices/system/node/node*/meminfo
files. This leads to a node_exporter crash on such kernels.
Fix this by checking for an empty string first.
Signed-off-by: Pavel Borzenkov <pavel.borzenkov@gmail.com>
Add a new collector which exposes the content of the /sys/kernel/mm/ksm
directory. This directory contains control and statistics files for the
Kernel Samepage Merging daemon.
The collector is not enabled by default.
Signed-off-by: Pavel Borzenkov <pavel.borzenkov@gmail.com>
The entropy collector uses the readUintFromFile() function, which is defined
by the conntrack collector. Thus, it is impossible to build node_exporter
without the conntrack collector. Fix this by factoring the function out into
a helper.go file.
Signed-off-by: Pavel Borzenkov <pavel.borzenkov@gmail.com>
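A sketch of what such a shared helper looks like; the path in main is just an example consumer:
```
package main

import (
	"fmt"
	"os"
	"strconv"
	"strings"
)

// readUintFromFile parses a single unsigned integer from a procfs/sysfs file.
func readUintFromFile(path string) (uint64, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return 0, err
	}
	return strconv.ParseUint(strings.TrimSpace(string(data)), 10, 64)
}

func main() {
	v, err := readUintFromFile("/proc/sys/kernel/random/entropy_avail")
	if err != nil {
		panic(err)
	}
	fmt.Println("entropy available:", v)
}
```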
It is sometimes useful to understand the distribution of free/occupied
memory between NUMA nodes when dealing with performance problems. To do so,
add a new meminfo_numa collector that enables exporting of per-node
statistics, along with unit and end-to-end tests for it.
Signed-off-by: Pavel Borzenkov <pavel.borzenkov@gmail.com>
Removes unused signal handlers left over from signal-based collection
and blocks the non-Windows-relevant collectors loadavg and interrupts.
Signal-based collection was removed in 1c17481a42.
As OS X doesn't have its own interrupts provider, don't build
interrupts_common on OS X either. Otherwise the build fails, because
interrupts_common depends on variables provided by platform-specific
files.
Signed-off-by: Pavel Borzenkov <pavel.borzenkov@gmail.com>