# HELP go_gc_duration_seconds A summary of the pause duration of garbage collection cycles.
# TYPE go_gc_duration_seconds summary
# HELP go_goroutines Number of goroutines that currently exist.
# TYPE go_goroutines gauge
# HELP go_info Information about the Go environment.
# TYPE go_info gauge
# HELP go_memstats_alloc_bytes Number of bytes allocated and still in use.
# TYPE go_memstats_alloc_bytes gauge
# HELP go_memstats_alloc_bytes_total Total number of bytes allocated, even if freed.
# TYPE go_memstats_alloc_bytes_total counter
# HELP go_memstats_buck_hash_sys_bytes Number of bytes used by the profiling bucket hash table.
# TYPE go_memstats_buck_hash_sys_bytes gauge
# HELP go_memstats_frees_total Total number of frees.
# TYPE go_memstats_frees_total counter
# HELP go_memstats_gc_cpu_fraction The fraction of this program's available CPU time used by the GC since the program started.
# TYPE go_memstats_gc_cpu_fraction gauge
# HELP go_memstats_gc_sys_bytes Number of bytes used for garbage collection system metadata.
# TYPE go_memstats_gc_sys_bytes gauge
# HELP go_memstats_heap_alloc_bytes Number of heap bytes allocated and still in use.
# TYPE go_memstats_heap_alloc_bytes gauge
# HELP go_memstats_heap_idle_bytes Number of heap bytes waiting to be used.
# TYPE go_memstats_heap_idle_bytes gauge
# HELP go_memstats_heap_inuse_bytes Number of heap bytes that are in use.
# TYPE go_memstats_heap_inuse_bytes gauge
# HELP go_memstats_heap_objects Number of allocated objects.
# TYPE go_memstats_heap_objects gauge
# HELP go_memstats_heap_released_bytes Number of heap bytes released to OS.
# TYPE go_memstats_heap_released_bytes gauge
# HELP go_memstats_heap_sys_bytes Number of heap bytes obtained from system.
# TYPE go_memstats_heap_sys_bytes gauge
# HELP go_memstats_last_gc_time_seconds Number of seconds since 1970 of last garbage collection.
# TYPE go_memstats_last_gc_time_seconds gauge
# HELP go_memstats_lookups_total Total number of pointer lookups.
# TYPE go_memstats_lookups_total counter
# HELP go_memstats_mallocs_total Total number of mallocs.
# TYPE go_memstats_mallocs_total counter
# HELP go_memstats_mcache_inuse_bytes Number of bytes in use by mcache structures.
# TYPE go_memstats_mcache_inuse_bytes gauge
# HELP go_memstats_mcache_sys_bytes Number of bytes used for mcache structures obtained from system.
# TYPE go_memstats_mcache_sys_bytes gauge
# HELP go_memstats_mspan_inuse_bytes Number of bytes in use by mspan structures.
# TYPE go_memstats_mspan_inuse_bytes gauge
# HELP go_memstats_mspan_sys_bytes Number of bytes used for mspan structures obtained from system.
# TYPE go_memstats_mspan_sys_bytes gauge
# HELP go_memstats_next_gc_bytes Number of heap bytes when next garbage collection will take place.
# TYPE go_memstats_next_gc_bytes gauge
# HELP go_memstats_other_sys_bytes Number of bytes used for other system allocations.
# TYPE go_memstats_other_sys_bytes gauge
# HELP go_memstats_stack_inuse_bytes Number of bytes in use by the stack allocator.
# TYPE go_memstats_stack_inuse_bytes gauge
# HELP go_memstats_stack_sys_bytes Number of bytes obtained from system for stack allocator.
# TYPE go_memstats_stack_sys_bytes gauge
# HELP go_memstats_sys_bytes Number of bytes obtained from system.
# TYPE go_memstats_sys_bytes gauge
# HELP go_threads Number of OS threads created.
# TYPE go_threads gauge
# HELP node_arp_entries ARP entries by device
# TYPE node_arp_entries gauge
node_arp_entries{device="eth0"} 3
node_arp_entries{device="eth1"} 3
# HELP node_bcache_active_journal_entries Number of journal entries that are newer than the index.
# TYPE node_bcache_active_journal_entries gauge
node_bcache_active_journal_entries{uuid="deaddd54-c735-46d5-868e-f331c5fd7c74"} 1
# HELP node_bcache_average_key_size_sectors Average data per key in the btree (sectors).
# TYPE node_bcache_average_key_size_sectors gauge
node_bcache_average_key_size_sectors{uuid="deaddd54-c735-46d5-868e-f331c5fd7c74"} 0
# HELP node_bcache_btree_cache_size_bytes Amount of memory currently used by the btree cache.
# TYPE node_bcache_btree_cache_size_bytes gauge
node_bcache_btree_cache_size_bytes{uuid="deaddd54-c735-46d5-868e-f331c5fd7c74"} 0
# HELP node_bcache_btree_nodes Total nodes in the btree.
# TYPE node_bcache_btree_nodes gauge
node_bcache_btree_nodes{uuid="deaddd54-c735-46d5-868e-f331c5fd7c74"} 0
# HELP node_bcache_btree_read_average_duration_seconds Average btree read duration.
# TYPE node_bcache_btree_read_average_duration_seconds gauge
node_bcache_btree_read_average_duration_seconds{uuid="deaddd54-c735-46d5-868e-f331c5fd7c74"} 1.305e-06
# HELP node_bcache_bypassed_bytes_total Amount of IO (both reads and writes) that has bypassed the cache.
# TYPE node_bcache_bypassed_bytes_total counter
node_bcache_bypassed_bytes_total{backing_device="bdev0",uuid="deaddd54-c735-46d5-868e-f331c5fd7c74"} 0
# HELP node_bcache_cache_available_percent Percentage of cache device without dirty data, usable for writeback (may contain clean cached data).
# TYPE node_bcache_cache_available_percent gauge
node_bcache_cache_available_percent{uuid="deaddd54-c735-46d5-868e-f331c5fd7c74"} 100
# HELP node_bcache_cache_bypass_hits_total Hits for IO intended to skip the cache.
# TYPE node_bcache_cache_bypass_hits_total counter
node_bcache_cache_bypass_hits_total{backing_device="bdev0",uuid="deaddd54-c735-46d5-868e-f331c5fd7c74"} 0
# HELP node_bcache_cache_bypass_misses_total Misses for IO intended to skip the cache.
# TYPE node_bcache_cache_bypass_misses_total counter
node_bcache_cache_bypass_misses_total{backing_device="bdev0",uuid="deaddd54-c735-46d5-868e-f331c5fd7c74"} 0
# HELP node_bcache_cache_hits_total Hits counted per individual IO as bcache sees them.
# TYPE node_bcache_cache_hits_total counter
node_bcache_cache_hits_total{backing_device="bdev0",uuid="deaddd54-c735-46d5-868e-f331c5fd7c74"} 546
# HELP node_bcache_cache_miss_collisions_total Instances where data insertion from cache miss raced with write (data already present).
# TYPE node_bcache_cache_miss_collisions_total counter
node_bcache_cache_miss_collisions_total{backing_device="bdev0",uuid="deaddd54-c735-46d5-868e-f331c5fd7c74"} 0
# HELP node_bcache_cache_misses_total Misses counted per individual IO as bcache sees them.
# TYPE node_bcache_cache_misses_total counter
node_bcache_cache_misses_total{backing_device="bdev0",uuid="deaddd54-c735-46d5-868e-f331c5fd7c74"} 0
# HELP node_bcache_cache_read_races_total Counts instances where while data was being read from the cache, the bucket was reused and invalidated - i.e. where the pointer was stale after the read completed.
# TYPE node_bcache_cache_read_races_total counter
node_bcache_cache_read_races_total{uuid="deaddd54-c735-46d5-868e-f331c5fd7c74"} 0
# HELP node_bcache_cache_readaheads_total Count of times readahead occurred.
# TYPE node_bcache_cache_readaheads_total counter
node_bcache_cache_readaheads_total{backing_device="bdev0",uuid="deaddd54-c735-46d5-868e-f331c5fd7c74"} 0
# HELP node_bcache_congested Congestion.
# TYPE node_bcache_congested gauge
node_bcache_congested{uuid="deaddd54-c735-46d5-868e-f331c5fd7c74"} 0
# HELP node_bcache_dirty_data_bytes Amount of dirty data for this backing device in the cache.
# TYPE node_bcache_dirty_data_bytes gauge
node_bcache_dirty_data_bytes{backing_device="bdev0",uuid="deaddd54-c735-46d5-868e-f331c5fd7c74"} 0
# HELP node_bcache_io_errors Number of errors that have occurred, decayed by io_error_halflife.
# TYPE node_bcache_io_errors gauge
node_bcache_io_errors{cache_device="cache0",uuid="deaddd54-c735-46d5-868e-f331c5fd7c74"} 0
# HELP node_bcache_metadata_written_bytes_total Sum of all non data writes (btree writes and all other metadata).
# TYPE node_bcache_metadata_written_bytes_total counter
node_bcache_metadata_written_bytes_total{cache_device="cache0",uuid="deaddd54-c735-46d5-868e-f331c5fd7c74"} 512
# HELP node_bcache_priority_stats_metadata_percent Bcache's metadata overhead.
# TYPE node_bcache_priority_stats_metadata_percent gauge
node_bcache_priority_stats_metadata_percent{cache_device="cache0",uuid="deaddd54-c735-46d5-868e-f331c5fd7c74"} 0
# HELP node_bcache_priority_stats_unused_percent The percentage of the cache that doesn't contain any data.
# TYPE node_bcache_priority_stats_unused_percent gauge
node_bcache_priority_stats_unused_percent{cache_device="cache0",uuid="deaddd54-c735-46d5-868e-f331c5fd7c74"} 99
# HELP node_bcache_root_usage_percent Percentage of the root btree node in use (tree depth increases if too high).
# TYPE node_bcache_root_usage_percent gauge
node_bcache_root_usage_percent{uuid="deaddd54-c735-46d5-868e-f331c5fd7c74"} 0
# HELP node_bcache_tree_depth Depth of the btree.
# TYPE node_bcache_tree_depth gauge
node_bcache_tree_depth{uuid="deaddd54-c735-46d5-868e-f331c5fd7c74"} 0
# HELP node_bcache_written_bytes_total Sum of all data that has been written to the cache.
# TYPE node_bcache_written_bytes_total counter
node_bcache_written_bytes_total{cache_device="cache0",uuid="deaddd54-c735-46d5-868e-f331c5fd7c74"} 0
# HELP node_bonding_active Number of active slaves per bonding interface.
# TYPE node_bonding_active gauge
node_bonding_active{master="bond0"} 0
node_bonding_active{master="dmz"} 2
node_bonding_active{master="int"} 1
# HELP node_bonding_slaves Number of configured slaves per bonding interface.
# TYPE node_bonding_slaves gauge
node_bonding_slaves{master="bond0"} 0
node_bonding_slaves{master="dmz"} 2
node_bonding_slaves{master="int"} 2
# HELP node_boot_time_seconds Node boot time, in unixtime.
# TYPE node_boot_time_seconds gauge
node_boot_time_seconds 1.418183276e+09
# HELP node_btrfs_allocation_ratio Data allocation ratio for a layout/data type
# TYPE node_btrfs_allocation_ratio gauge
node_btrfs_allocation_ratio{block_group_type="data",mode="raid0",uuid="0abb23a9-579b-43e6-ad30-227ef47fcb9d"} 1
node_btrfs_allocation_ratio{block_group_type="data",mode="raid5",uuid="7f07c59f-6136-449c-ab87-e1cf2328731b"} 1.3333333333333333
node_btrfs_allocation_ratio{block_group_type="metadata",mode="raid1",uuid="0abb23a9-579b-43e6-ad30-227ef47fcb9d"} 2
node_btrfs_allocation_ratio{block_group_type="metadata",mode="raid6",uuid="7f07c59f-6136-449c-ab87-e1cf2328731b"} 2
node_btrfs_allocation_ratio{block_group_type="system",mode="raid1",uuid="0abb23a9-579b-43e6-ad30-227ef47fcb9d"} 2
node_btrfs_allocation_ratio{block_group_type="system",mode="raid6",uuid="7f07c59f-6136-449c-ab87-e1cf2328731b"} 2
# HELP node_btrfs_device_size_bytes Size of a device that is part of the filesystem.
# TYPE node_btrfs_device_size_bytes gauge
node_btrfs_device_size_bytes{device="loop22",uuid="7f07c59f-6136-449c-ab87-e1cf2328731b"} 1.073741824e+10
node_btrfs_device_size_bytes{device="loop23",uuid="7f07c59f-6136-449c-ab87-e1cf2328731b"} 1.073741824e+10
node_btrfs_device_size_bytes{device="loop24",uuid="7f07c59f-6136-449c-ab87-e1cf2328731b"} 1.073741824e+10
node_btrfs_device_size_bytes{device="loop25",uuid="0abb23a9-579b-43e6-ad30-227ef47fcb9d"} 1.073741824e+10
node_btrfs_device_size_bytes{device="loop25",uuid="7f07c59f-6136-449c-ab87-e1cf2328731b"} 1.073741824e+10
node_btrfs_device_size_bytes{device="loop26",uuid="0abb23a9-579b-43e6-ad30-227ef47fcb9d"} 1.073741824e+10
# HELP node_btrfs_global_rsv_size_bytes Size of global reserve.
# TYPE node_btrfs_global_rsv_size_bytes gauge
node_btrfs_global_rsv_size_bytes{uuid="0abb23a9-579b-43e6-ad30-227ef47fcb9d"} 1.6777216e+07
node_btrfs_global_rsv_size_bytes{uuid="7f07c59f-6136-449c-ab87-e1cf2328731b"} 1.6777216e+07
# HELP node_btrfs_info Filesystem information
# TYPE node_btrfs_info gauge
node_btrfs_info{label="",uuid="7f07c59f-6136-449c-ab87-e1cf2328731b"} 1
node_btrfs_info{label="fixture",uuid="0abb23a9-579b-43e6-ad30-227ef47fcb9d"} 1
# HELP node_btrfs_reserved_bytes Amount of space reserved for a data type
# TYPE node_btrfs_reserved_bytes gauge
node_btrfs_reserved_bytes{block_group_type="data",uuid="0abb23a9-579b-43e6-ad30-227ef47fcb9d"} 0
node_btrfs_reserved_bytes{block_group_type="data",uuid="7f07c59f-6136-449c-ab87-e1cf2328731b"} 0
node_btrfs_reserved_bytes{block_group_type="metadata",uuid="0abb23a9-579b-43e6-ad30-227ef47fcb9d"} 0
node_btrfs_reserved_bytes{block_group_type="metadata",uuid="7f07c59f-6136-449c-ab87-e1cf2328731b"} 0
node_btrfs_reserved_bytes{block_group_type="system",uuid="0abb23a9-579b-43e6-ad30-227ef47fcb9d"} 0
node_btrfs_reserved_bytes{block_group_type="system",uuid="7f07c59f-6136-449c-ab87-e1cf2328731b"} 0
# HELP node_btrfs_size_bytes Amount of space allocated for a layout/data type
# TYPE node_btrfs_size_bytes gauge
node_btrfs_size_bytes{block_group_type="data",mode="raid0",uuid="0abb23a9-579b-43e6-ad30-227ef47fcb9d"} 2.147483648e+09
node_btrfs_size_bytes{block_group_type="data",mode="raid5",uuid="7f07c59f-6136-449c-ab87-e1cf2328731b"} 6.44087808e+08
node_btrfs_size_bytes{block_group_type="metadata",mode="raid1",uuid="0abb23a9-579b-43e6-ad30-227ef47fcb9d"} 1.073741824e+09
node_btrfs_size_bytes{block_group_type="metadata",mode="raid6",uuid="7f07c59f-6136-449c-ab87-e1cf2328731b"} 4.29391872e+08
node_btrfs_size_bytes{block_group_type="system",mode="raid1",uuid="0abb23a9-579b-43e6-ad30-227ef47fcb9d"} 8.388608e+06
node_btrfs_size_bytes{block_group_type="system",mode="raid6",uuid="7f07c59f-6136-449c-ab87-e1cf2328731b"} 1.6777216e+07
# HELP node_btrfs_used_bytes Amount of used space by a layout/data type
# TYPE node_btrfs_used_bytes gauge
node_btrfs_used_bytes{block_group_type="data",mode="raid0",uuid="0abb23a9-579b-43e6-ad30-227ef47fcb9d"} 8.08189952e+08
node_btrfs_used_bytes{block_group_type="data",mode="raid5",uuid="7f07c59f-6136-449c-ab87-e1cf2328731b"} 0
node_btrfs_used_bytes{block_group_type="metadata",mode="raid1",uuid="0abb23a9-579b-43e6-ad30-227ef47fcb9d"} 933888
node_btrfs_used_bytes{block_group_type="metadata",mode="raid6",uuid="7f07c59f-6136-449c-ab87-e1cf2328731b"} 114688
node_btrfs_used_bytes{block_group_type="system",mode="raid1",uuid="0abb23a9-579b-43e6-ad30-227ef47fcb9d"} 16384
node_btrfs_used_bytes{block_group_type="system",mode="raid6",uuid="7f07c59f-6136-449c-ab87-e1cf2328731b"} 16384
# HELP node_buddyinfo_blocks Count of free blocks according to size.
# TYPE node_buddyinfo_blocks gauge
node_buddyinfo_blocks{node="0",size="0",zone="DMA"} 1
node_buddyinfo_blocks{node="0",size="0",zone="DMA32"} 759
node_buddyinfo_blocks{node="0",size="0",zone="Normal"} 4381
node_buddyinfo_blocks{node="0",size="1",zone="DMA"} 0
node_buddyinfo_blocks{node="0",size="1",zone="DMA32"} 572
node_buddyinfo_blocks{node="0",size="1",zone="Normal"} 1093
node_buddyinfo_blocks{node="0",size="10",zone="DMA"} 3
node_buddyinfo_blocks{node="0",size="10",zone="DMA32"} 0
node_buddyinfo_blocks{node="0",size="10",zone="Normal"} 0
node_buddyinfo_blocks{node="0",size="2",zone="DMA"} 1
node_buddyinfo_blocks{node="0",size="2",zone="DMA32"} 791
node_buddyinfo_blocks{node="0",size="2",zone="Normal"} 185
node_buddyinfo_blocks{node="0",size="3",zone="DMA"} 0
node_buddyinfo_blocks{node="0",size="3",zone="DMA32"} 475
node_buddyinfo_blocks{node="0",size="3",zone="Normal"} 1530
node_buddyinfo_blocks{node="0",size="4",zone="DMA"} 2
node_buddyinfo_blocks{node="0",size="4",zone="DMA32"} 194
node_buddyinfo_blocks{node="0",size="4",zone="Normal"} 567
node_buddyinfo_blocks{node="0",size="5",zone="DMA"} 1
node_buddyinfo_blocks{node="0",size="5",zone="DMA32"} 45
node_buddyinfo_blocks{node="0",size="5",zone="Normal"} 102
node_buddyinfo_blocks{node="0",size="6",zone="DMA"} 1
node_buddyinfo_blocks{node="0",size="6",zone="DMA32"} 12
node_buddyinfo_blocks{node="0",size="6",zone="Normal"} 4
node_buddyinfo_blocks{node="0",size="7",zone="DMA"} 0
node_buddyinfo_blocks{node="0",size="7",zone="DMA32"} 0
node_buddyinfo_blocks{node="0",size="7",zone="Normal"} 0
node_buddyinfo_blocks{node="0",size="8",zone="DMA"} 1
node_buddyinfo_blocks{node="0",size="8",zone="DMA32"} 0
node_buddyinfo_blocks{node="0",size="8",zone="Normal"} 0
node_buddyinfo_blocks{node="0",size="9",zone="DMA"} 1
node_buddyinfo_blocks{node="0",size="9",zone="DMA32"} 0
node_buddyinfo_blocks{node="0",size="9",zone="Normal"} 0
# HELP node_context_switches_total Total number of context switches.
# TYPE node_context_switches_total counter
node_context_switches_total 3.8014093e+07
# HELP node_cooling_device_cur_state Current throttle state of the cooling device
# TYPE node_cooling_device_cur_state gauge
node_cooling_device_cur_state{name="0",type="Processor"} 0
# HELP node_cooling_device_max_state Maximum throttle state of the cooling device
# TYPE node_cooling_device_max_state gauge
node_cooling_device_max_state{name="0",type="Processor"} 3
# HELP node_cpu_core_throttles_total Number of times this cpu core has been throttled.
# TYPE node_cpu_core_throttles_total counter
node_cpu_core_throttles_total{core="0",package="0"} 5
node_cpu_core_throttles_total{core="0",package="1"} 0
node_cpu_core_throttles_total{core="1",package="0"} 0
node_cpu_core_throttles_total{core="1",package="1"} 9
# HELP node_cpu_guest_seconds_total Seconds the cpus spent in guests (VMs) for each mode.
# TYPE node_cpu_guest_seconds_total counter
node_cpu_guest_seconds_total{cpu="0",mode="nice"} 0.01
node_cpu_guest_seconds_total{cpu="0",mode="user"} 0.02
node_cpu_guest_seconds_total{cpu="1",mode="nice"} 0.02
node_cpu_guest_seconds_total{cpu="1",mode="user"} 0.03
node_cpu_guest_seconds_total{cpu="2",mode="nice"} 0.03
node_cpu_guest_seconds_total{cpu="2",mode="user"} 0.04
node_cpu_guest_seconds_total{cpu="3",mode="nice"} 0.04
node_cpu_guest_seconds_total{cpu="3",mode="user"} 0.05
node_cpu_guest_seconds_total{cpu="4",mode="nice"} 0.05
node_cpu_guest_seconds_total{cpu="4",mode="user"} 0.06
node_cpu_guest_seconds_total{cpu="5",mode="nice"} 0.06
node_cpu_guest_seconds_total{cpu="5",mode="user"} 0.07
node_cpu_guest_seconds_total{cpu="6",mode="nice"} 0.07
node_cpu_guest_seconds_total{cpu="6",mode="user"} 0.08
node_cpu_guest_seconds_total{cpu="7",mode="nice"} 0.08
node_cpu_guest_seconds_total{cpu="7",mode="user"} 0.09
# HELP node_cpu_info CPU information from /proc/cpuinfo.
# TYPE node_cpu_info gauge
node_cpu_info{cachesize="8192 KB",core="0",cpu="0",family="6",microcode="0xb4",model="142",model_name="Intel(R) Core(TM) i7-8650U CPU @ 1.90GHz",package="0",stepping="10",vendor="GenuineIntel"} 1
node_cpu_info{cachesize="8192 KB",core="0",cpu="4",family="6",microcode="0xb4",model="142",model_name="Intel(R) Core(TM) i7-8650U CPU @ 1.90GHz",package="0",stepping="10",vendor="GenuineIntel"} 1
node_cpu_info{cachesize="8192 KB",core="1",cpu="1",family="6",microcode="0xb4",model="142",model_name="Intel(R) Core(TM) i7-8650U CPU @ 1.90GHz",package="0",stepping="10",vendor="GenuineIntel"} 1
node_cpu_info{cachesize="8192 KB",core="1",cpu="5",family="6",microcode="0xb4",model="142",model_name="Intel(R) Core(TM) i7-8650U CPU @ 1.90GHz",package="0",stepping="10",vendor="GenuineIntel"} 1
node_cpu_info{cachesize="8192 KB",core="2",cpu="2",family="6",microcode="0xb4",model="142",model_name="Intel(R) Core(TM) i7-8650U CPU @ 1.90GHz",package="0",stepping="10",vendor="GenuineIntel"} 1
node_cpu_info{cachesize="8192 KB",core="2",cpu="6",family="6",microcode="0xb4",model="142",model_name="Intel(R) Core(TM) i7-8650U CPU @ 1.90GHz",package="0",stepping="10",vendor="GenuineIntel"} 1
node_cpu_info{cachesize="8192 KB",core="3",cpu="3",family="6",microcode="0xb4",model="142",model_name="Intel(R) Core(TM) i7-8650U CPU @ 1.90GHz",package="0",stepping="10",vendor="GenuineIntel"} 1
node_cpu_info{cachesize="8192 KB",core="3",cpu="7",family="6",microcode="0xb4",model="142",model_name="Intel(R) Core(TM) i7-8650U CPU @ 1.90GHz",package="0",stepping="10",vendor="GenuineIntel"} 1
# HELP node_cpu_package_throttles_total Number of times this cpu package has been throttled.
# TYPE node_cpu_package_throttles_total counter
node_cpu_package_throttles_total{package="0"} 30
node_cpu_package_throttles_total{package="1"} 6
# HELP node_cpu_scaling_frequency_hertz Current scaled cpu thread frequency in hertz.
# TYPE node_cpu_scaling_frequency_hertz gauge
node_cpu_scaling_frequency_hertz{cpu="0"} 1.699981e+09
node_cpu_scaling_frequency_hertz{cpu="1"} 1.699981e+09
node_cpu_scaling_frequency_hertz{cpu="2"} 8e+06
node_cpu_scaling_frequency_hertz{cpu="3"} 8e+06
# HELP node_cpu_scaling_frequency_max_hertz Maximum scaled cpu thread frequency in hertz.
# TYPE node_cpu_scaling_frequency_max_hertz gauge
node_cpu_scaling_frequency_max_hertz{cpu="0"} 3.7e+09
node_cpu_scaling_frequency_max_hertz{cpu="1"} 3.7e+09
node_cpu_scaling_frequency_max_hertz{cpu="2"} 4.2e+09
node_cpu_scaling_frequency_max_hertz{cpu="3"} 4.2e+09
# HELP node_cpu_scaling_frequency_min_hertz Minimum scaled cpu thread frequency in hertz.
# TYPE node_cpu_scaling_frequency_min_hertz gauge
node_cpu_scaling_frequency_min_hertz{cpu="0"} 8e+08
node_cpu_scaling_frequency_min_hertz{cpu="1"} 8e+08
node_cpu_scaling_frequency_min_hertz{cpu="2"} 1e+06
node_cpu_scaling_frequency_min_hertz{cpu="3"} 1e+06
# HELP node_cpu_seconds_total Seconds the cpus spent in each mode.
# TYPE node_cpu_seconds_total counter
node_cpu_seconds_total{cpu="0",mode="idle"} 10870.69
node_cpu_seconds_total{cpu="0",mode="iowait"} 2.2
node_cpu_seconds_total{cpu="0",mode="irq"} 0.01
node_cpu_seconds_total{cpu="0",mode="nice"} 0.19
node_cpu_seconds_total{cpu="0",mode="softirq"} 34.1
node_cpu_seconds_total{cpu="0",mode="steal"} 0
node_cpu_seconds_total{cpu="0",mode="system"} 210.45
node_cpu_seconds_total{cpu="0",mode="user"} 444.9
node_cpu_seconds_total{cpu="1",mode="idle"} 11107.87
node_cpu_seconds_total{cpu="1",mode="iowait"} 5.91
node_cpu_seconds_total{cpu="1",mode="irq"} 0
node_cpu_seconds_total{cpu="1",mode="nice"} 0.23
node_cpu_seconds_total{cpu="1",mode="softirq"} 0.46
node_cpu_seconds_total{cpu="1",mode="steal"} 0
node_cpu_seconds_total{cpu="1",mode="system"} 164.74
node_cpu_seconds_total{cpu="1",mode="user"} 478.69
node_cpu_seconds_total{cpu="2",mode="idle"} 11123.21
node_cpu_seconds_total{cpu="2",mode="iowait"} 4.41
node_cpu_seconds_total{cpu="2",mode="irq"} 0
node_cpu_seconds_total{cpu="2",mode="nice"} 0.36
node_cpu_seconds_total{cpu="2",mode="softirq"} 3.26
node_cpu_seconds_total{cpu="2",mode="steal"} 0
node_cpu_seconds_total{cpu="2",mode="system"} 159.16
node_cpu_seconds_total{cpu="2",mode="user"} 465.04
node_cpu_seconds_total{cpu="3",mode="idle"} 11132.3
node_cpu_seconds_total{cpu="3",mode="iowait"} 5.33
node_cpu_seconds_total{cpu="3",mode="irq"} 0
node_cpu_seconds_total{cpu="3",mode="nice"} 1.02
node_cpu_seconds_total{cpu="3",mode="softirq"} 0.6
node_cpu_seconds_total{cpu="3",mode="steal"} 0
node_cpu_seconds_total{cpu="3",mode="system"} 156.83
node_cpu_seconds_total{cpu="3",mode="user"} 470.54
node_cpu_seconds_total{cpu="4",mode="idle"} 11403.21
node_cpu_seconds_total{cpu="4",mode="iowait"} 2.17
node_cpu_seconds_total{cpu="4",mode="irq"} 0
node_cpu_seconds_total{cpu="4",mode="nice"} 0.25
node_cpu_seconds_total{cpu="4",mode="softirq"} 0.08
node_cpu_seconds_total{cpu="4",mode="steal"} 0
node_cpu_seconds_total{cpu="4",mode="system"} 107.76
node_cpu_seconds_total{cpu="4",mode="user"} 284.13
node_cpu_seconds_total{cpu="5",mode="idle"} 11362.7
node_cpu_seconds_total{cpu="5",mode="iowait"} 6.72
node_cpu_seconds_total{cpu="5",mode="irq"} 0
node_cpu_seconds_total{cpu="5",mode="nice"} 1.01
node_cpu_seconds_total{cpu="5",mode="softirq"} 0.3
node_cpu_seconds_total{cpu="5",mode="steal"} 0
node_cpu_seconds_total{cpu="5",mode="system"} 115.86
node_cpu_seconds_total{cpu="5",mode="user"} 292.71
node_cpu_seconds_total{cpu="6",mode="idle"} 11397.21
node_cpu_seconds_total{cpu="6",mode="iowait"} 3.19
node_cpu_seconds_total{cpu="6",mode="irq"} 0
node_cpu_seconds_total{cpu="6",mode="nice"} 0.36
node_cpu_seconds_total{cpu="6",mode="softirq"} 0.29
node_cpu_seconds_total{cpu="6",mode="steal"} 0
node_cpu_seconds_total{cpu="6",mode="system"} 102.76
node_cpu_seconds_total{cpu="6",mode="user"} 291.52
node_cpu_seconds_total{cpu="7",mode="idle"} 11392.82
node_cpu_seconds_total{cpu="7",mode="iowait"} 5.55
node_cpu_seconds_total{cpu="7",mode="irq"} 0
node_cpu_seconds_total{cpu="7",mode="nice"} 2.68
node_cpu_seconds_total{cpu="7",mode="softirq"} 0.31
node_cpu_seconds_total{cpu="7",mode="steal"} 0
node_cpu_seconds_total{cpu="7",mode="system"} 101.64
node_cpu_seconds_total{cpu="7",mode="user"} 290.98
# HELP node_disk_discard_time_seconds_total This is the total number of seconds spent by all discards.
# TYPE node_disk_discard_time_seconds_total counter
node_disk_discard_time_seconds_total{device="sdb"} 11.13
node_disk_discard_time_seconds_total{device="sdc"} 11.13
# HELP node_disk_discarded_sectors_total The total number of sectors discarded successfully.
# TYPE node_disk_discarded_sectors_total counter
node_disk_discarded_sectors_total{device="sdb"} 1.925173784e+09
node_disk_discarded_sectors_total{device="sdc"} 1.25173784e+08
# HELP node_disk_discards_completed_total The total number of discards completed successfully.
# TYPE node_disk_discards_completed_total counter
node_disk_discards_completed_total{device="sdb"} 68851
node_disk_discards_completed_total{device="sdc"} 18851
# HELP node_disk_discards_merged_total The total number of discards merged.
# TYPE node_disk_discards_merged_total counter
node_disk_discards_merged_total{device="sdb"} 0
node_disk_discards_merged_total{device="sdc"} 0
# HELP node_disk_flush_requests_time_seconds_total This is the total number of seconds spent by all flush requests.
# TYPE node_disk_flush_requests_time_seconds_total counter
node_disk_flush_requests_time_seconds_total{device="sdc"} 1.944
# HELP node_disk_flush_requests_total The total number of flush requests completed successfully
# TYPE node_disk_flush_requests_total counter
node_disk_flush_requests_total{device="sdc"} 1555
# HELP node_disk_io_now The number of I/Os currently in progress.
# TYPE node_disk_io_now gauge
node_disk_io_now{device="dm-0"} 0
node_disk_io_now{device="dm-1"} 0
node_disk_io_now{device="dm-2"} 0
node_disk_io_now{device="dm-3"} 0
node_disk_io_now{device="dm-4"} 0
node_disk_io_now{device="dm-5"} 0
node_disk_io_now{device="mmcblk0"} 0
node_disk_io_now{device="mmcblk0p1"} 0
node_disk_io_now{device="mmcblk0p2"} 0
node_disk_io_now{device="nvme0n1"} 0
node_disk_io_now{device="sda"} 0
node_disk_io_now{device="sdb"} 0
node_disk_io_now{device="sdc"} 0
node_disk_io_now{device="sr0"} 0
node_disk_io_now{device="vda"} 0
# HELP node_disk_io_time_seconds_total Total seconds spent doing I/Os.
# TYPE node_disk_io_time_seconds_total counter
node_disk_io_time_seconds_total{device="dm-0"} 11325.968
node_disk_io_time_seconds_total{device="dm-1"} 0.076
node_disk_io_time_seconds_total{device="dm-2"} 65.4
node_disk_io_time_seconds_total{device="dm-3"} 0.016
node_disk_io_time_seconds_total{device="dm-4"} 0.024
node_disk_io_time_seconds_total{device="dm-5"} 58.848
node_disk_io_time_seconds_total{device="mmcblk0"} 0.136
node_disk_io_time_seconds_total{device="mmcblk0p1"} 0.024
node_disk_io_time_seconds_total{device="mmcblk0p2"} 0.068
node_disk_io_time_seconds_total{device="nvme0n1"} 222.766
node_disk_io_time_seconds_total{device="sda"} 9653.880000000001
node_disk_io_time_seconds_total{device="sdb"} 60.730000000000004
node_disk_io_time_seconds_total{device="sdc"} 10.73
node_disk_io_time_seconds_total{device="sr0"} 0
node_disk_io_time_seconds_total{device="vda"} 41614.592000000004
# HELP node_disk_io_time_weighted_seconds_total The weighted # of seconds spent doing I/Os.
# TYPE node_disk_io_time_weighted_seconds_total counter
node_disk_io_time_weighted_seconds_total{device="dm-0"} 1.206301256e+06
node_disk_io_time_weighted_seconds_total{device="dm-1"} 0.084
node_disk_io_time_weighted_seconds_total{device="dm-2"} 129.416
node_disk_io_time_weighted_seconds_total{device="dm-3"} 0.10400000000000001
node_disk_io_time_weighted_seconds_total{device="dm-4"} 0.044
node_disk_io_time_weighted_seconds_total{device="dm-5"} 105.632
node_disk_io_time_weighted_seconds_total{device="mmcblk0"} 0.156
node_disk_io_time_weighted_seconds_total{device="mmcblk0p1"} 0.024
node_disk_io_time_weighted_seconds_total{device="mmcblk0p2"} 0.068
node_disk_io_time_weighted_seconds_total{device="nvme0n1"} 1032.546
node_disk_io_time_weighted_seconds_total{device="sda"} 82621.804
node_disk_io_time_weighted_seconds_total{device="sdb"} 67.07000000000001
node_disk_io_time_weighted_seconds_total{device="sdc"} 17.07
node_disk_io_time_weighted_seconds_total{device="sr0"} 0
node_disk_io_time_weighted_seconds_total{device="vda"} 2.0778722280000001e+06
# HELP node_disk_read_bytes_total The total number of bytes read successfully.
# TYPE node_disk_read_bytes_total counter
node_disk_read_bytes_total{device="dm-0"} 5.13708655616e+11
node_disk_read_bytes_total{device="dm-1"} 1.589248e+06
node_disk_read_bytes_total{device="dm-2"} 1.578752e+08
node_disk_read_bytes_total{device="dm-3"} 1.98144e+06
node_disk_read_bytes_total{device="dm-4"} 529408
node_disk_read_bytes_total{device="dm-5"} 4.3150848e+07
node_disk_read_bytes_total{device="mmcblk0"} 798720
node_disk_read_bytes_total{device="mmcblk0p1"} 81920
node_disk_read_bytes_total{device="mmcblk0p2"} 389120
node_disk_read_bytes_total{device="nvme0n1"} 2.377714176e+09
node_disk_read_bytes_total{device="sda"} 5.13713216512e+11
node_disk_read_bytes_total{device="sdb"} 4.944782848e+09
node_disk_read_bytes_total{device="sdc"} 8.48782848e+08
node_disk_read_bytes_total{device="sr0"} 0
node_disk_read_bytes_total{device="vda"} 1.6727491584e+10
# HELP node_disk_read_time_seconds_total The total number of seconds spent by all reads.
# TYPE node_disk_read_time_seconds_total counter
node_disk_read_time_seconds_total{device="dm-0"} 46229.572
node_disk_read_time_seconds_total{device="dm-1"} 0.084
node_disk_read_time_seconds_total{device="dm-2"} 6.5360000000000005
node_disk_read_time_seconds_total{device="dm-3"} 0.10400000000000001
node_disk_read_time_seconds_total{device="dm-4"} 0.028
node_disk_read_time_seconds_total{device="dm-5"} 0.924
node_disk_read_time_seconds_total{device="mmcblk0"} 0.156
node_disk_read_time_seconds_total{device="mmcblk0p1"} 0.024
node_disk_read_time_seconds_total{device="mmcblk0p2"} 0.068
node_disk_read_time_seconds_total{device="nvme0n1"} 21.650000000000002
node_disk_read_time_seconds_total{device="sda"} 18492.372
node_disk_read_time_seconds_total{device="sdb"} 0.084
node_disk_read_time_seconds_total{device="sdc"} 0.014
node_disk_read_time_seconds_total{device="sr0"} 0
node_disk_read_time_seconds_total{device="vda"} 8655.768
# HELP node_disk_reads_completed_total The total number of reads completed successfully.
# TYPE node_disk_reads_completed_total counter
node_disk_reads_completed_total{device="dm-0"} 5.9910002e+07
node_disk_reads_completed_total{device="dm-1"} 388
node_disk_reads_completed_total{device="dm-2"} 11571
node_disk_reads_completed_total{device="dm-3"} 3870
node_disk_reads_completed_total{device="dm-4"} 392
node_disk_reads_completed_total{device="dm-5"} 3729
node_disk_reads_completed_total{device="mmcblk0"} 192
node_disk_reads_completed_total{device="mmcblk0p1"} 17
node_disk_reads_completed_total{device="mmcblk0p2"} 95
node_disk_reads_completed_total{device="nvme0n1"} 47114
node_disk_reads_completed_total{device="sda"} 2.5354637e+07
node_disk_reads_completed_total{device="sdb"} 326552
node_disk_reads_completed_total{device="sdc"} 126552
node_disk_reads_completed_total{device="sr0"} 0
node_disk_reads_completed_total{device="vda"} 1.775784e+06
# HELP node_disk_reads_merged_total The total number of reads merged.
# TYPE node_disk_reads_merged_total counter
node_disk_reads_merged_total{device="dm-0"} 0
node_disk_reads_merged_total{device="dm-1"} 0
node_disk_reads_merged_total{device="dm-2"} 0
node_disk_reads_merged_total{device="dm-3"} 0
node_disk_reads_merged_total{device="dm-4"} 0
node_disk_reads_merged_total{device="dm-5"} 0
node_disk_reads_merged_total{device="mmcblk0"} 3
node_disk_reads_merged_total{device="mmcblk0p1"} 3
node_disk_reads_merged_total{device="mmcblk0p2"} 0
node_disk_reads_merged_total{device="nvme0n1"} 4
node_disk_reads_merged_total{device="sda"} 3.4367663e+07
node_disk_reads_merged_total{device="sdb"} 841
node_disk_reads_merged_total{device="sdc"} 141
node_disk_reads_merged_total{device="sr0"} 0
node_disk_reads_merged_total{device="vda"} 15386
# HELP node_disk_write_time_seconds_total This is the total number of seconds spent by all writes.
# TYPE node_disk_write_time_seconds_total counter
node_disk_write_time_seconds_total{device="dm-0"} 1.1585578e+06
node_disk_write_time_seconds_total{device="dm-1"} 0
node_disk_write_time_seconds_total{device="dm-2"} 122.884
node_disk_write_time_seconds_total{device="dm-3"} 0
node_disk_write_time_seconds_total{device="dm-4"} 0.016
node_disk_write_time_seconds_total{device="dm-5"} 104.684
node_disk_write_time_seconds_total{device="mmcblk0"} 0
node_disk_write_time_seconds_total{device="mmcblk0p1"} 0
node_disk_write_time_seconds_total{device="mmcblk0p2"} 0
node_disk_write_time_seconds_total{device="nvme0n1"} 1011.053
node_disk_write_time_seconds_total{device="sda"} 63877.96
node_disk_write_time_seconds_total{device="sdb"} 5.007
node_disk_write_time_seconds_total{device="sdc"} 1.0070000000000001
node_disk_write_time_seconds_total{device="sr0"} 0
node_disk_write_time_seconds_total{device="vda"} 2.069221364e+06
# HELP node_disk_writes_completed_total The total number of writes completed successfully.
# TYPE node_disk_writes_completed_total counter
node_disk_writes_completed_total{device="dm-0"} 3.9231014e+07
node_disk_writes_completed_total{device="dm-1"} 74
node_disk_writes_completed_total{device="dm-2"} 153522
node_disk_writes_completed_total{device="dm-3"} 0
node_disk_writes_completed_total{device="dm-4"} 38
node_disk_writes_completed_total{device="dm-5"} 98918
node_disk_writes_completed_total{device="mmcblk0"} 0
node_disk_writes_completed_total{device="mmcblk0p1"} 0
node_disk_writes_completed_total{device="mmcblk0p2"} 0
node_disk_writes_completed_total{device="nvme0n1"} 1.07832e+06
node_disk_writes_completed_total{device="sda"} 2.8444756e+07
node_disk_writes_completed_total{device="sdb"} 41822
node_disk_writes_completed_total{device="sdc"} 11822
node_disk_writes_completed_total{device="sr0"} 0
node_disk_writes_completed_total{device="vda"} 6.038856e+06
# HELP node_disk_writes_merged_total The number of writes merged.
# TYPE node_disk_writes_merged_total counter
node_disk_writes_merged_total{device="dm-0"} 0
node_disk_writes_merged_total{device="dm-1"} 0
node_disk_writes_merged_total{device="dm-2"} 0
node_disk_writes_merged_total{device="dm-3"} 0
node_disk_writes_merged_total{device="dm-4"} 0
node_disk_writes_merged_total{device="dm-5"} 0
node_disk_writes_merged_total{device="mmcblk0"} 0
node_disk_writes_merged_total{device="mmcblk0p1"} 0
node_disk_writes_merged_total{device="mmcblk0p2"} 0
node_disk_writes_merged_total{device="nvme0n1"} 43950
node_disk_writes_merged_total{device="sda"} 1.1134226e+07
node_disk_writes_merged_total{device="sdb"} 2895
node_disk_writes_merged_total{device="sdc"} 1895
node_disk_writes_merged_total{device="sr0"} 0
node_disk_writes_merged_total{device="vda"} 2.0711856e+07
# HELP node_disk_written_bytes_total The total number of bytes written successfully.
# TYPE node_disk_written_bytes_total counter
node_disk_written_bytes_total{device="dm-0"} 2.5891680256e+11
node_disk_written_bytes_total{device="dm-1"} 303104
node_disk_written_bytes_total{device="dm-2"} 2.607828992e+09
node_disk_written_bytes_total{device="dm-3"} 0
node_disk_written_bytes_total{device="dm-4"} 70144
node_disk_written_bytes_total{device="dm-5"} 5.89664256e+08
node_disk_written_bytes_total{device="mmcblk0"} 0
node_disk_written_bytes_total{device="mmcblk0p1"} 0
node_disk_written_bytes_total{device="mmcblk0p2"} 0
node_disk_written_bytes_total{device="nvme0n1"} 2.0199236096e+10
node_disk_written_bytes_total{device="sda"} 2.58916880384e+11
node_disk_written_bytes_total{device="sdb"} 1.01012736e+09
node_disk_written_bytes_total{device="sdc"} 8.852736e+07
node_disk_written_bytes_total{device="sr0"} 0
node_disk_written_bytes_total{device="vda"} 1.0938236928e+11
# HELP node_drbd_activitylog_writes_total Number of updates of the activity log area of the meta data.
# TYPE node_drbd_activitylog_writes_total counter
node_drbd_activitylog_writes_total{device="drbd1"} 1100
# HELP node_drbd_application_pending Number of block I/O requests forwarded to DRBD, but not yet answered by DRBD.
# TYPE node_drbd_application_pending gauge
node_drbd_application_pending{device="drbd1"} 12348
# HELP node_drbd_bitmap_writes_total Number of updates of the bitmap area of the meta data.
# TYPE node_drbd_bitmap_writes_total counter
node_drbd_bitmap_writes_total{device="drbd1"} 221
# HELP node_drbd_connected Whether DRBD is connected to the peer.
# TYPE node_drbd_connected gauge
node_drbd_connected{device="drbd1"} 1
# HELP node_drbd_disk_read_bytes_total Net data read from local hard disk; in bytes.
# TYPE node_drbd_disk_read_bytes_total counter
node_drbd_disk_read_bytes_total{device="drbd1"} 1.2154539008e+11
# HELP node_drbd_disk_state_is_up_to_date Whether the disk of the node is up to date.
# TYPE node_drbd_disk_state_is_up_to_date gauge
node_drbd_disk_state_is_up_to_date{device="drbd1",node="local"} 1
node_drbd_disk_state_is_up_to_date{device="drbd1",node="remote"} 1
# HELP node_drbd_disk_written_bytes_total Net data written on local hard disk; in bytes.
# TYPE node_drbd_disk_written_bytes_total counter
node_drbd_disk_written_bytes_total{device="drbd1"} 2.8941845504e+10
# HELP node_drbd_epochs Number of Epochs currently on the fly.
# TYPE node_drbd_epochs gauge
node_drbd_epochs{device="drbd1"} 1
# HELP node_drbd_local_pending Number of open requests to the local I/O sub-system.
# TYPE node_drbd_local_pending gauge
node_drbd_local_pending{device="drbd1"} 12345
# HELP node_drbd_network_received_bytes_total Total number of bytes received via the network.
# TYPE node_drbd_network_received_bytes_total counter
node_drbd_network_received_bytes_total{device="drbd1"} 1.0961011e+07
# HELP node_drbd_network_sent_bytes_total Total number of bytes sent via the network.
# TYPE node_drbd_network_sent_bytes_total counter
node_drbd_network_sent_bytes_total{device="drbd1"} 1.7740228608e+10
# HELP node_drbd_node_role_is_primary Whether the role of the node is in the primary state.
# TYPE node_drbd_node_role_is_primary gauge
node_drbd_node_role_is_primary{device="drbd1",node="local"} 1
node_drbd_node_role_is_primary{device="drbd1",node="remote"} 1
# HELP node_drbd_out_of_sync_bytes Amount of data known to be out of sync; in bytes.
# TYPE node_drbd_out_of_sync_bytes gauge
node_drbd_out_of_sync_bytes{device="drbd1"} 1.2645376e+07
# HELP node_drbd_remote_pending Number of requests sent to the peer, but that have not yet been answered by the latter.
# TYPE node_drbd_remote_pending gauge
node_drbd_remote_pending{device="drbd1"} 12346
# HELP node_drbd_remote_unacknowledged Number of requests received by the peer via the network connection, but that have not yet been answered.
# TYPE node_drbd_remote_unacknowledged gauge
node_drbd_remote_unacknowledged{device="drbd1"} 12347
# HELP node_edac_correctable_errors_total Total correctable memory errors.
# TYPE node_edac_correctable_errors_total counter
node_edac_correctable_errors_total{controller="0"} 1
# HELP node_edac_csrow_correctable_errors_total Total correctable memory errors for this csrow.
# TYPE node_edac_csrow_correctable_errors_total counter
node_edac_csrow_correctable_errors_total{controller="0",csrow="0"} 3
node_edac_csrow_correctable_errors_total{controller="0",csrow="unknown"} 2
# HELP node_edac_csrow_uncorrectable_errors_total Total uncorrectable memory errors for this csrow.
# TYPE node_edac_csrow_uncorrectable_errors_total counter
node_edac_csrow_uncorrectable_errors_total{controller="0",csrow="0"} 4
node_edac_csrow_uncorrectable_errors_total{controller="0",csrow="unknown"} 6
# HELP node_edac_uncorrectable_errors_total Total uncorrectable memory errors.
# TYPE node_edac_uncorrectable_errors_total counter
node_edac_uncorrectable_errors_total{controller="0"} 5
# HELP node_entropy_available_bits Bits of available entropy.
# TYPE node_entropy_available_bits gauge
node_entropy_available_bits 1337
# HELP node_exporter_build_info A metric with a constant '1' value labeled by version, revision, branch, and goversion from which node_exporter was built.
# TYPE node_exporter_build_info gauge
# HELP node_filefd_allocated File descriptor statistics: allocated.
# TYPE node_filefd_allocated gauge
node_filefd_allocated 1024
# HELP node_filefd_maximum File descriptor statistics: maximum.
# TYPE node_filefd_maximum gauge
node_filefd_maximum 1.631329e+06
# HELP node_forks_total Total number of forks.
# TYPE node_forks_total counter
node_forks_total 26442
# HELP node_hwmon_chip_names Annotation metric for human-readable chip names
# TYPE node_hwmon_chip_names gauge
node_hwmon_chip_names{chip="nct6779",chip_name="nct6779"} 1
node_hwmon_chip_names{chip="platform_coretemp_0",chip_name="coretemp"} 1
node_hwmon_chip_names{chip="platform_coretemp_1",chip_name="coretemp"} 1
# HELP node_hwmon_fan_alarm Hardware sensor alarm status (fan)
# TYPE node_hwmon_fan_alarm gauge
node_hwmon_fan_alarm{chip="nct6779",sensor="fan2"} 0
# HELP node_hwmon_fan_beep_enabled Hardware monitor sensor has beeping enabled
# TYPE node_hwmon_fan_beep_enabled gauge
node_hwmon_fan_beep_enabled{chip="nct6779",sensor="fan2"} 0
# HELP node_hwmon_fan_manual Hardware monitor fan element manual
# TYPE node_hwmon_fan_manual gauge
node_hwmon_fan_manual{chip="platform_applesmc_768",sensor="fan1"} 0
node_hwmon_fan_manual{chip="platform_applesmc_768",sensor="fan2"} 0
# HELP node_hwmon_fan_max_rpm Hardware monitor for fan revolutions per minute (max)
# TYPE node_hwmon_fan_max_rpm gauge
node_hwmon_fan_max_rpm{chip="platform_applesmc_768",sensor="fan1"} 6156
node_hwmon_fan_max_rpm{chip="platform_applesmc_768",sensor="fan2"} 5700
# HELP node_hwmon_fan_min_rpm Hardware monitor for fan revolutions per minute (min)
# TYPE node_hwmon_fan_min_rpm gauge
node_hwmon_fan_min_rpm{chip="nct6779",sensor="fan2"} 0
node_hwmon_fan_min_rpm{chip="platform_applesmc_768",sensor="fan1"} 2160
node_hwmon_fan_min_rpm{chip="platform_applesmc_768",sensor="fan2"} 2000
# HELP node_hwmon_fan_output Hardware monitor fan element output
# TYPE node_hwmon_fan_output gauge
node_hwmon_fan_output{chip="platform_applesmc_768",sensor="fan1"} 2160
node_hwmon_fan_output{chip="platform_applesmc_768",sensor="fan2"} 2000
# HELP node_hwmon_fan_pulses Hardware monitor fan element pulses
# TYPE node_hwmon_fan_pulses gauge
node_hwmon_fan_pulses{chip="nct6779",sensor="fan2"} 2
# HELP node_hwmon_fan_rpm Hardware monitor for fan revolutions per minute (input)
# TYPE node_hwmon_fan_rpm gauge
node_hwmon_fan_rpm{chip="nct6779",sensor="fan2"} 1098
node_hwmon_fan_rpm{chip="platform_applesmc_768",sensor="fan1"} 0
node_hwmon_fan_rpm{chip="platform_applesmc_768",sensor="fan2"} 1998
# HELP node_hwmon_fan_target_rpm Hardware monitor for fan revolutions per minute (target)
# TYPE node_hwmon_fan_target_rpm gauge
node_hwmon_fan_target_rpm{chip="nct6779",sensor="fan2"} 27000
# HELP node_hwmon_fan_tolerance Hardware monitor fan element tolerance
# TYPE node_hwmon_fan_tolerance gauge
node_hwmon_fan_tolerance{chip="nct6779",sensor="fan2"} 0
# HELP node_hwmon_in_alarm Hardware sensor alarm status (in)
# TYPE node_hwmon_in_alarm gauge
node_hwmon_in_alarm{chip="nct6779",sensor="in0"} 0
node_hwmon_in_alarm{chip="nct6779",sensor="in1"} 1
# HELP node_hwmon_in_beep_enabled Hardware monitor sensor has beeping enabled
# TYPE node_hwmon_in_beep_enabled gauge
node_hwmon_in_beep_enabled{chip="nct6779",sensor="in0"} 0
node_hwmon_in_beep_enabled{chip="nct6779",sensor="in1"} 0
# HELP node_hwmon_in_max_volts Hardware monitor for voltage (max)
# TYPE node_hwmon_in_max_volts gauge
node_hwmon_in_max_volts{chip="nct6779",sensor="in0"} 1.744
node_hwmon_in_max_volts{chip="nct6779",sensor="in1"} 0
# HELP node_hwmon_in_min_volts Hardware monitor for voltage (min)
# TYPE node_hwmon_in_min_volts gauge
node_hwmon_in_min_volts{chip="nct6779",sensor="in0"} 0
node_hwmon_in_min_volts{chip="nct6779",sensor="in1"} 0
# HELP node_hwmon_in_volts Hardware monitor for voltage (input)
# TYPE node_hwmon_in_volts gauge
node_hwmon_in_volts{chip="nct6779",sensor="in0"} 0.792
node_hwmon_in_volts{chip="nct6779",sensor="in1"} 1.024
# HELP node_hwmon_intrusion_alarm Hardware sensor alarm status (intrusion)
# TYPE node_hwmon_intrusion_alarm gauge
node_hwmon_intrusion_alarm{chip="nct6779",sensor="intrusion0"} 1
node_hwmon_intrusion_alarm{chip="nct6779",sensor="intrusion1"} 1
# HELP node_hwmon_intrusion_beep_enabled Hardware monitor sensor has beeping enabled
# TYPE node_hwmon_intrusion_beep_enabled gauge
node_hwmon_intrusion_beep_enabled{chip="nct6779",sensor="intrusion0"} 0
node_hwmon_intrusion_beep_enabled{chip="nct6779",sensor="intrusion1"} 0
# HELP node_hwmon_pwm_auto_point1_pwm Hardware monitor pwm element auto_point1_pwm
# TYPE node_hwmon_pwm_auto_point1_pwm gauge
node_hwmon_pwm_auto_point1_pwm{chip="nct6779",sensor="pwm1"} 153
# HELP node_hwmon_pwm_auto_point1_temp Hardware monitor pwm element auto_point1_temp
# TYPE node_hwmon_pwm_auto_point1_temp gauge
node_hwmon_pwm_auto_point1_temp{chip="nct6779",sensor="pwm1"} 30000
# HELP node_hwmon_pwm_auto_point2_pwm Hardware monitor pwm element auto_point2_pwm
# TYPE node_hwmon_pwm_auto_point2_pwm gauge
node_hwmon_pwm_auto_point2_pwm{chip="nct6779",sensor="pwm1"} 255
# HELP node_hwmon_pwm_auto_point2_temp Hardware monitor pwm element auto_point2_temp
# TYPE node_hwmon_pwm_auto_point2_temp gauge
node_hwmon_pwm_auto_point2_temp{chip="nct6779",sensor="pwm1"} 70000
# HELP node_hwmon_pwm_auto_point3_pwm Hardware monitor pwm element auto_point3_pwm
# TYPE node_hwmon_pwm_auto_point3_pwm gauge
node_hwmon_pwm_auto_point3_pwm{chip="nct6779",sensor="pwm1"} 255
# HELP node_hwmon_pwm_auto_point3_temp Hardware monitor pwm element auto_point3_temp
# TYPE node_hwmon_pwm_auto_point3_temp gauge
node_hwmon_pwm_auto_point3_temp{chip="nct6779",sensor="pwm1"} 70000
# HELP node_hwmon_pwm_auto_point4_pwm Hardware monitor pwm element auto_point4_pwm
# TYPE node_hwmon_pwm_auto_point4_pwm gauge
node_hwmon_pwm_auto_point4_pwm{chip="nct6779",sensor="pwm1"} 255
# HELP node_hwmon_pwm_auto_point4_temp Hardware monitor pwm element auto_point4_temp
# TYPE node_hwmon_pwm_auto_point4_temp gauge
node_hwmon_pwm_auto_point4_temp{chip="nct6779",sensor="pwm1"} 70000
# HELP node_hwmon_pwm_auto_point5_pwm Hardware monitor pwm element auto_point5_pwm
# TYPE node_hwmon_pwm_auto_point5_pwm gauge
node_hwmon_pwm_auto_point5_pwm{chip="nct6779",sensor="pwm1"} 255
# HELP node_hwmon_pwm_auto_point5_temp Hardware monitor pwm element auto_point5_temp
# TYPE node_hwmon_pwm_auto_point5_temp gauge
node_hwmon_pwm_auto_point5_temp{chip="nct6779",sensor="pwm1"} 75000
# HELP node_hwmon_pwm_crit_temp_tolerance Hardware monitor pwm element crit_temp_tolerance
# TYPE node_hwmon_pwm_crit_temp_tolerance gauge
node_hwmon_pwm_crit_temp_tolerance{chip="nct6779",sensor="pwm1"} 2000
# HELP node_hwmon_pwm_enable Hardware monitor pwm element enable
# TYPE node_hwmon_pwm_enable gauge
node_hwmon_pwm_enable{chip="nct6779",sensor="pwm1"} 5
# HELP node_hwmon_pwm_floor Hardware monitor pwm element floor
# TYPE node_hwmon_pwm_floor gauge
node_hwmon_pwm_floor{chip="nct6779",sensor="pwm1"} 1
# HELP node_hwmon_pwm_mode Hardware monitor pwm element mode
# TYPE node_hwmon_pwm_mode gauge
node_hwmon_pwm_mode{chip="nct6779",sensor="pwm1"} 1
# HELP node_hwmon_pwm_start Hardware monitor pwm element start
# TYPE node_hwmon_pwm_start gauge
node_hwmon_pwm_start{chip="nct6779",sensor="pwm1"} 1
# HELP node_hwmon_pwm_step_down_time Hardware monitor pwm element step_down_time
# TYPE node_hwmon_pwm_step_down_time gauge
node_hwmon_pwm_step_down_time{chip="nct6779",sensor="pwm1"} 100
# HELP node_hwmon_pwm_step_up_time Hardware monitor pwm element step_up_time
# TYPE node_hwmon_pwm_step_up_time gauge
node_hwmon_pwm_step_up_time{chip="nct6779",sensor="pwm1"} 100
# HELP node_hwmon_pwm_stop_time Hardware monitor pwm element stop_time
# TYPE node_hwmon_pwm_stop_time gauge
node_hwmon_pwm_stop_time{chip="nct6779",sensor="pwm1"} 6000
# HELP node_hwmon_pwm_target_temp Hardware monitor pwm element target_temp
# TYPE node_hwmon_pwm_target_temp gauge
node_hwmon_pwm_target_temp{chip="nct6779",sensor="pwm1"} 0
# HELP node_hwmon_pwm_temp_sel Hardware monitor pwm element temp_sel
# TYPE node_hwmon_pwm_temp_sel gauge
node_hwmon_pwm_temp_sel{chip="nct6779",sensor="pwm1"} 7
# HELP node_hwmon_pwm_temp_tolerance Hardware monitor pwm element temp_tolerance
# TYPE node_hwmon_pwm_temp_tolerance gauge
node_hwmon_pwm_temp_tolerance{chip="nct6779",sensor="pwm1"} 0
# HELP node_hwmon_pwm_weight_duty_base Hardware monitor pwm element weight_duty_base
# TYPE node_hwmon_pwm_weight_duty_base gauge
node_hwmon_pwm_weight_duty_base{chip="nct6779",sensor="pwm1"} 0
# HELP node_hwmon_pwm_weight_duty_step Hardware monitor pwm element weight_duty_step
# TYPE node_hwmon_pwm_weight_duty_step gauge
node_hwmon_pwm_weight_duty_step{chip="nct6779",sensor="pwm1"} 0
# HELP node_hwmon_pwm_weight_temp_sel Hardware monitor pwm element weight_temp_sel
# TYPE node_hwmon_pwm_weight_temp_sel gauge
node_hwmon_pwm_weight_temp_sel{chip="nct6779",sensor="pwm1"} 1
# HELP node_hwmon_pwm_weight_temp_step Hardware monitor pwm element weight_temp_step
# TYPE node_hwmon_pwm_weight_temp_step gauge
node_hwmon_pwm_weight_temp_step{chip="nct6779",sensor="pwm1"} 0
# HELP node_hwmon_pwm_weight_temp_step_base Hardware monitor pwm element weight_temp_step_base
# TYPE node_hwmon_pwm_weight_temp_step_base gauge
node_hwmon_pwm_weight_temp_step_base{chip="nct6779",sensor="pwm1"} 0
# HELP node_hwmon_pwm_weight_temp_step_tol Hardware monitor pwm element weight_temp_step_tol
# TYPE node_hwmon_pwm_weight_temp_step_tol gauge
node_hwmon_pwm_weight_temp_step_tol{chip="nct6779",sensor="pwm1"} 0
# HELP node_hwmon_sensor_label Label for given chip and sensor
# TYPE node_hwmon_sensor_label gauge
node_hwmon_sensor_label{chip="hwmon4",label="foosensor",sensor="temp1"} 1
node_hwmon_sensor_label{chip="hwmon4",label="foosensor",sensor="temp2"} 1
node_hwmon_sensor_label{chip="platform_applesmc_768",label="left_side",sensor="fan1"} 1
node_hwmon_sensor_label{chip="platform_applesmc_768",label="right_side",sensor="fan2"} 1
node_hwmon_sensor_label{chip="platform_coretemp_0",label="core_0",sensor="temp2"} 1
node_hwmon_sensor_label{chip="platform_coretemp_0",label="core_1",sensor="temp3"} 1
node_hwmon_sensor_label{chip="platform_coretemp_0",label="core_2",sensor="temp4"} 1
node_hwmon_sensor_label{chip="platform_coretemp_0",label="core_3",sensor="temp5"} 1
node_hwmon_sensor_label{chip="platform_coretemp_0",label="physical_id_0",sensor="temp1"} 1
node_hwmon_sensor_label{chip="platform_coretemp_1",label="core_0",sensor="temp2"} 1
node_hwmon_sensor_label{chip="platform_coretemp_1",label="core_1",sensor="temp3"} 1
node_hwmon_sensor_label{chip="platform_coretemp_1",label="core_2",sensor="temp4"} 1
node_hwmon_sensor_label{chip="platform_coretemp_1",label="core_3",sensor="temp5"} 1
node_hwmon_sensor_label{chip="platform_coretemp_1",label="physical_id_0",sensor="temp1"} 1
# HELP node_hwmon_temp_celsius Hardware monitor for temperature (input)
# TYPE node_hwmon_temp_celsius gauge
node_hwmon_temp_celsius{chip="hwmon4",sensor="temp1"} 55
node_hwmon_temp_celsius{chip="hwmon4",sensor="temp2"} 54
node_hwmon_temp_celsius{chip="platform_coretemp_0",sensor="temp1"} 55
node_hwmon_temp_celsius{chip="platform_coretemp_0",sensor="temp2"} 54
node_hwmon_temp_celsius{chip="platform_coretemp_0",sensor="temp3"} 52
node_hwmon_temp_celsius{chip="platform_coretemp_0",sensor="temp4"} 53
node_hwmon_temp_celsius{chip="platform_coretemp_0",sensor="temp5"} 50
node_hwmon_temp_celsius{chip="platform_coretemp_1",sensor="temp1"} 55
node_hwmon_temp_celsius{chip="platform_coretemp_1",sensor="temp2"} 54
node_hwmon_temp_celsius{chip="platform_coretemp_1",sensor="temp3"} 52
node_hwmon_temp_celsius{chip="platform_coretemp_1",sensor="temp4"} 53
node_hwmon_temp_celsius{chip="platform_coretemp_1",sensor="temp5"} 50
# HELP node_hwmon_temp_crit_alarm_celsius Hardware monitor for temperature (crit_alarm)
# TYPE node_hwmon_temp_crit_alarm_celsius gauge
node_hwmon_temp_crit_alarm_celsius{chip="hwmon4",sensor="temp1"} 0
node_hwmon_temp_crit_alarm_celsius{chip="hwmon4",sensor="temp2"} 0
node_hwmon_temp_crit_alarm_celsius{chip="platform_coretemp_0",sensor="temp1"} 0
node_hwmon_temp_crit_alarm_celsius{chip="platform_coretemp_0",sensor="temp2"} 0
node_hwmon_temp_crit_alarm_celsius{chip="platform_coretemp_0",sensor="temp3"} 0
node_hwmon_temp_crit_alarm_celsius{chip="platform_coretemp_0",sensor="temp4"} 0
node_hwmon_temp_crit_alarm_celsius{chip="platform_coretemp_0",sensor="temp5"} 0
node_hwmon_temp_crit_alarm_celsius{chip="platform_coretemp_1",sensor="temp1"} 0
node_hwmon_temp_crit_alarm_celsius{chip="platform_coretemp_1",sensor="temp2"} 0
node_hwmon_temp_crit_alarm_celsius{chip="platform_coretemp_1",sensor="temp3"} 0
node_hwmon_temp_crit_alarm_celsius{chip="platform_coretemp_1",sensor="temp4"} 0
node_hwmon_temp_crit_alarm_celsius{chip="platform_coretemp_1",sensor="temp5"} 0
# HELP node_hwmon_temp_crit_celsius Hardware monitor for temperature (crit)
# TYPE node_hwmon_temp_crit_celsius gauge
node_hwmon_temp_crit_celsius{chip="hwmon4",sensor="temp1"} 100
node_hwmon_temp_crit_celsius{chip="hwmon4",sensor="temp2"} 100
node_hwmon_temp_crit_celsius{chip="platform_coretemp_0",sensor="temp1"} 100
node_hwmon_temp_crit_celsius{chip="platform_coretemp_0",sensor="temp2"} 100
node_hwmon_temp_crit_celsius{chip="platform_coretemp_0",sensor="temp3"} 100
node_hwmon_temp_crit_celsius{chip="platform_coretemp_0",sensor="temp4"} 100
node_hwmon_temp_crit_celsius{chip="platform_coretemp_0",sensor="temp5"} 100
node_hwmon_temp_crit_celsius{chip="platform_coretemp_1",sensor="temp1"} 100
node_hwmon_temp_crit_celsius{chip="platform_coretemp_1",sensor="temp2"} 100
node_hwmon_temp_crit_celsius{chip="platform_coretemp_1",sensor="temp3"} 100
node_hwmon_temp_crit_celsius{chip="platform_coretemp_1",sensor="temp4"} 100
node_hwmon_temp_crit_celsius{chip="platform_coretemp_1",sensor="temp5"} 100
# HELP node_hwmon_temp_max_celsius Hardware monitor for temperature (max)
# TYPE node_hwmon_temp_max_celsius gauge
node_hwmon_temp_max_celsius{chip="hwmon4",sensor="temp1"} 100
node_hwmon_temp_max_celsius{chip="hwmon4",sensor="temp2"} 100
node_hwmon_temp_max_celsius{chip="platform_coretemp_0",sensor="temp1"} 84
node_hwmon_temp_max_celsius{chip="platform_coretemp_0",sensor="temp2"} 84
node_hwmon_temp_max_celsius{chip="platform_coretemp_0",sensor="temp3"} 84
node_hwmon_temp_max_celsius{chip="platform_coretemp_0",sensor="temp4"} 84
node_hwmon_temp_max_celsius{chip="platform_coretemp_0",sensor="temp5"} 84
node_hwmon_temp_max_celsius{chip="platform_coretemp_1",sensor="temp1"} 84
node_hwmon_temp_max_celsius{chip="platform_coretemp_1",sensor="temp2"} 84
node_hwmon_temp_max_celsius{chip="platform_coretemp_1",sensor="temp3"} 84
node_hwmon_temp_max_celsius{chip="platform_coretemp_1",sensor="temp4"} 84
node_hwmon_temp_max_celsius{chip="platform_coretemp_1",sensor="temp5"} 84
# HELP node_infiniband_info Non-numeric data from /sys/class/infiniband/<device>, value is always 1.
# TYPE node_infiniband_info gauge
node_infiniband_info{board_id="I40IW Board ID",device="i40iw0",firmware_version="0.2",hca_type="I40IW"} 1
node_infiniband_info{board_id="SM_1141000001000",device="mlx4_0",firmware_version="2.31.5050",hca_type="MT4099"} 1
# HELP node_infiniband_legacy_data_received_bytes_total Number of data octets received on all links
# TYPE node_infiniband_legacy_data_received_bytes_total counter
node_infiniband_legacy_data_received_bytes_total{device="mlx4_0",port="1"} 1.8527668e+07
node_infiniband_legacy_data_received_bytes_total{device="mlx4_0",port="2"} 1.8527668e+07
# HELP node_infiniband_legacy_data_transmitted_bytes_total Number of data octets transmitted on all links
# TYPE node_infiniband_legacy_data_transmitted_bytes_total counter
node_infiniband_legacy_data_transmitted_bytes_total{device="mlx4_0",port="1"} 1.493376e+07
node_infiniband_legacy_data_transmitted_bytes_total{device="mlx4_0",port="2"} 1.493376e+07
# HELP node_infiniband_legacy_multicast_packets_received_total Number of multicast packets received
# TYPE node_infiniband_legacy_multicast_packets_received_total counter
node_infiniband_legacy_multicast_packets_received_total{device="mlx4_0",port="1"} 93
node_infiniband_legacy_multicast_packets_received_total{device="mlx4_0",port="2"} 93
# HELP node_infiniband_legacy_multicast_packets_transmitted_total Number of multicast packets transmitted
# TYPE node_infiniband_legacy_multicast_packets_transmitted_total counter
node_infiniband_legacy_multicast_packets_transmitted_total{device="mlx4_0",port="1"} 16
node_infiniband_legacy_multicast_packets_transmitted_total{device="mlx4_0",port="2"} 16
# HELP node_infiniband_legacy_packets_received_total Number of data packets received on all links
# TYPE node_infiniband_legacy_packets_received_total counter
node_infiniband_legacy_packets_received_total{device="mlx4_0",port="1"} 0
node_infiniband_legacy_packets_received_total{device="mlx4_0",port="2"} 0
# HELP node_infiniband_legacy_packets_transmitted_total Number of data packets transmitted on all links
# TYPE node_infiniband_legacy_packets_transmitted_total counter
node_infiniband_legacy_packets_transmitted_total{device="mlx4_0",port="1"} 0
node_infiniband_legacy_packets_transmitted_total{device="mlx4_0",port="2"} 0
# HELP node_infiniband_legacy_unicast_packets_received_total Number of unicast packets received
# TYPE node_infiniband_legacy_unicast_packets_received_total counter
node_infiniband_legacy_unicast_packets_received_total{device="mlx4_0",port="1"} 61148
node_infiniband_legacy_unicast_packets_received_total{device="mlx4_0",port="2"} 61148
# HELP node_infiniband_legacy_unicast_packets_transmitted_total Number of unicast packets transmitted
# TYPE node_infiniband_legacy_unicast_packets_transmitted_total counter
node_infiniband_legacy_unicast_packets_transmitted_total{device="mlx4_0",port="1"} 61239
node_infiniband_legacy_unicast_packets_transmitted_total{device="mlx4_0",port="2"} 61239
# HELP node_infiniband_link_downed_total Number of times the link failed to recover from an error state and went down
# TYPE node_infiniband_link_downed_total counter
node_infiniband_link_downed_total{device="mlx4_0",port="1"} 0
node_infiniband_link_downed_total{device="mlx4_0",port="2"} 0
# HELP node_infiniband_link_error_recovery_total Number of times the link successfully recovered from an error state
# TYPE node_infiniband_link_error_recovery_total counter
node_infiniband_link_error_recovery_total{device="mlx4_0",port="1"} 0
node_infiniband_link_error_recovery_total{device="mlx4_0",port="2"} 0
# HELP node_infiniband_multicast_packets_received_total Number of multicast packets received (including errors)
# TYPE node_infiniband_multicast_packets_received_total counter
node_infiniband_multicast_packets_received_total{device="mlx4_0",port="1"} 93
node_infiniband_multicast_packets_received_total{device="mlx4_0",port="2"} 0
# HELP node_infiniband_multicast_packets_transmitted_total Number of multicast packets transmitted (including errors)
# TYPE node_infiniband_multicast_packets_transmitted_total counter
node_infiniband_multicast_packets_transmitted_total{device="mlx4_0",port="1"} 16
node_infiniband_multicast_packets_transmitted_total{device="mlx4_0",port="2"} 0
# HELP node_infiniband_physical_state_id Physical state of the InfiniBand port (0: no change, 1: sleep, 2: polling, 3: disable, 4: shift, 5: link up, 6: link error recover, 7: phytest)
# TYPE node_infiniband_physical_state_id gauge
node_infiniband_physical_state_id{device="i40iw0",port="1"} 5
node_infiniband_physical_state_id{device="mlx4_0",port="1"} 5
node_infiniband_physical_state_id{device="mlx4_0",port="2"} 5
# HELP node_infiniband_port_constraint_errors_received_total Number of packets received on the switch physical port that are discarded
# TYPE node_infiniband_port_constraint_errors_received_total counter
node_infiniband_port_constraint_errors_received_total{device="mlx4_0",port="1"} 0
# HELP node_infiniband_port_constraint_errors_transmitted_total Number of packets not transmitted from the switch physical port
# TYPE node_infiniband_port_constraint_errors_transmitted_total counter
node_infiniband_port_constraint_errors_transmitted_total{device="mlx4_0",port="1"} 0
# HELP node_infiniband_port_data_received_bytes_total Number of data octets received on all links
# TYPE node_infiniband_port_data_received_bytes_total counter
node_infiniband_port_data_received_bytes_total{device="mlx4_0",port="1"} 1.8527668e+07
node_infiniband_port_data_received_bytes_total{device="mlx4_0",port="2"} 0
# HELP node_infiniband_port_data_transmitted_bytes_total Number of data octets transmitted on all links
# TYPE node_infiniband_port_data_transmitted_bytes_total counter
node_infiniband_port_data_transmitted_bytes_total{device="mlx4_0",port="1"} 1.493376e+07
node_infiniband_port_data_transmitted_bytes_total{device="mlx4_0",port="2"} 0
# HELP node_infiniband_port_discards_received_total Number of inbound packets discarded by the port because the port is down or congested
# TYPE node_infiniband_port_discards_received_total counter
node_infiniband_port_discards_received_total{device="mlx4_0",port="1"} 0
# HELP node_infiniband_port_discards_transmitted_total Number of outbound packets discarded by the port because the port is down or congested
# TYPE node_infiniband_port_discards_transmitted_total counter
node_infiniband_port_discards_transmitted_total{device="mlx4_0",port="1"} 5
# HELP node_infiniband_port_errors_received_total Number of packets containing an error that were received on this port
# TYPE node_infiniband_port_errors_received_total counter
node_infiniband_port_errors_received_total{device="mlx4_0",port="1"} 0
# HELP node_infiniband_port_packets_received_total Number of packets received on all VLs by this port (including errors)
# TYPE node_infiniband_port_packets_received_total counter
node_infiniband_port_packets_received_total{device="mlx4_0",port="1"} 6.825908347e+09
# HELP node_infiniband_port_packets_transmitted_total Number of packets transmitted on all VLs from this port (including errors)
# TYPE node_infiniband_port_packets_transmitted_total counter
node_infiniband_port_packets_transmitted_total{device="mlx4_0",port="1"} 6.235865e+06
# HELP node_infiniband_port_transmit_wait_total Number of ticks during which the port had data to transmit but no data was sent during the entire tick
# TYPE node_infiniband_port_transmit_wait_total counter
node_infiniband_port_transmit_wait_total{device="mlx4_0",port="1"} 4.294967295e+09
# HELP node_infiniband_rate_bytes_per_second Maximum signal transfer rate
# TYPE node_infiniband_rate_bytes_per_second gauge
node_infiniband_rate_bytes_per_second{device="i40iw0",port="1"} 1.25e+09
node_infiniband_rate_bytes_per_second{device="mlx4_0",port="1"} 5e+09
node_infiniband_rate_bytes_per_second{device="mlx4_0",port="2"} 5e+09
# HELP node_infiniband_state_id State of the InfiniBand port (0: no change, 1: down, 2: init, 3: armed, 4: active, 5: act defer)
# TYPE node_infiniband_state_id gauge
node_infiniband_state_id{device="i40iw0",port="1"} 4
node_infiniband_state_id{device="mlx4_0",port="1"} 4
node_infiniband_state_id{device="mlx4_0",port="2"} 4
# HELP node_infiniband_unicast_packets_received_total Number of unicast packets received (including errors)
# TYPE node_infiniband_unicast_packets_received_total counter
node_infiniband_unicast_packets_received_total{device="mlx4_0",port="1"} 61148
node_infiniband_unicast_packets_received_total{device="mlx4_0",port="2"} 0
# HELP node_infiniband_unicast_packets_transmitted_total Number of unicast packets transmitted (including errors)
# TYPE node_infiniband_unicast_packets_transmitted_total counter
node_infiniband_unicast_packets_transmitted_total{device="mlx4_0",port="1"} 61239
node_infiniband_unicast_packets_transmitted_total{device="mlx4_0",port="2"} 0
# HELP node_interrupts_total Interrupt details.
# TYPE node_interrupts_total counter
node_interrupts_total{cpu="0",devices="",info="APIC ICR read retries",type="RTR"} 0
node_interrupts_total{cpu="0",devices="",info="Function call interrupts",type="CAL"} 148554
node_interrupts_total{cpu="0",devices="",info="IRQ work interrupts",type="IWI"} 1.509379e+06
node_interrupts_total{cpu="0",devices="",info="Local timer interrupts",type="LOC"} 1.74326351e+08
node_interrupts_total{cpu="0",devices="",info="Machine check exceptions",type="MCE"} 0
node_interrupts_total{cpu="0",devices="",info="Machine check polls",type="MCP"} 2406
node_interrupts_total{cpu="0",devices="",info="Non-maskable interrupts",type="NMI"} 47
node_interrupts_total{cpu="0",devices="",info="Performance monitoring interrupts",type="PMI"} 47
node_interrupts_total{cpu="0",devices="",info="Rescheduling interrupts",type="RES"} 1.0847134e+07
node_interrupts_total{cpu="0",devices="",info="Spurious interrupts",type="SPU"} 0
node_interrupts_total{cpu="0",devices="",info="TLB shootdowns",type="TLB"} 1.0460334e+07
node_interrupts_total{cpu="0",devices="",info="Thermal event interrupts",type="TRM"} 0
node_interrupts_total{cpu="0",devices="",info="Threshold APIC interrupts",type="THR"} 0
node_interrupts_total{cpu="0",devices="acpi",info="IR-IO-APIC-fasteoi",type="9"} 398553
node_interrupts_total{cpu="0",devices="ahci",info="IR-PCI-MSI-edge",type="43"} 7.434032e+06
node_interrupts_total{cpu="0",devices="dmar0",info="DMAR_MSI-edge",type="40"} 0
node_interrupts_total{cpu="0",devices="dmar1",info="DMAR_MSI-edge",type="41"} 0
node_interrupts_total{cpu="0",devices="ehci_hcd:usb1, mmc0",info="IR-IO-APIC-fasteoi",type="16"} 328511
node_interrupts_total{cpu="0",devices="ehci_hcd:usb2",info="IR-IO-APIC-fasteoi",type="23"} 1.451445e+06
node_interrupts_total{cpu="0",devices="i8042",info="IR-IO-APIC-edge",type="1"} 17960
node_interrupts_total{cpu="0",devices="i8042",info="IR-IO-APIC-edge",type="12"} 380847
node_interrupts_total{cpu="0",devices="i915",info="IR-PCI-MSI-edge",type="44"} 140636
node_interrupts_total{cpu="0",devices="iwlwifi",info="IR-PCI-MSI-edge",type="46"} 4.3078464e+07
node_interrupts_total{cpu="0",devices="mei_me",info="IR-PCI-MSI-edge",type="45"} 4
node_interrupts_total{cpu="0",devices="rtc0",info="IR-IO-APIC-edge",type="8"} 1
node_interrupts_total{cpu="0",devices="snd_hda_intel",info="IR-PCI-MSI-edge",type="47"} 350
node_interrupts_total{cpu="0",devices="timer",info="IR-IO-APIC-edge",type="0"} 18
node_interrupts_total{cpu="0",devices="xhci_hcd",info="IR-PCI-MSI-edge",type="42"} 378324
node_interrupts_total{cpu="1",devices="",info="APIC ICR read retries",type="RTR"} 0
node_interrupts_total{cpu="1",devices="",info="Function call interrupts",type="CAL"} 157441
node_interrupts_total{cpu="1",devices="",info="IRQ work interrupts",type="IWI"} 2.411776e+06
node_interrupts_total{cpu="1",devices="",info="Local timer interrupts",type="LOC"} 1.35776678e+08
node_interrupts_total{cpu="1",devices="",info="Machine check exceptions",type="MCE"} 0
node_interrupts_total{cpu="1",devices="",info="Machine check polls",type="MCP"} 2399
node_interrupts_total{cpu="1",devices="",info="Non-maskable interrupts",type="NMI"} 5031
node_interrupts_total{cpu="1",devices="",info="Performance monitoring interrupts",type="PMI"} 5031
node_interrupts_total{cpu="1",devices="",info="Rescheduling interrupts",type="RES"} 9.111507e+06
node_interrupts_total{cpu="1",devices="",info="Spurious interrupts",type="SPU"} 0
node_interrupts_total{cpu="1",devices="",info="TLB shootdowns",type="TLB"} 9.918429e+06
node_interrupts_total{cpu="1",devices="",info="Thermal event interrupts",type="TRM"} 0
node_interrupts_total{cpu="1",devices="",info="Threshold APIC interrupts",type="THR"} 0
node_interrupts_total{cpu="1",devices="acpi",info="IR-IO-APIC-fasteoi",type="9"} 2320
node_interrupts_total{cpu="1",devices="ahci",info="IR-PCI-MSI-edge",type="43"} 8.092205e+06
node_interrupts_total{cpu="1",devices="dmar0",info="DMAR_MSI-edge",type="40"} 0
node_interrupts_total{cpu="1",devices="dmar1",info="DMAR_MSI-edge",type="41"} 0
node_interrupts_total{cpu="1",devices="ehci_hcd:usb1, mmc0",info="IR-IO-APIC-fasteoi",type="16"} 322879
node_interrupts_total{cpu="1",devices="ehci_hcd:usb2",info="IR-IO-APIC-fasteoi",type="23"} 3.333499e+06
node_interrupts_total{cpu="1",devices="i8042",info="IR-IO-APIC-edge",type="1"} 105
node_interrupts_total{cpu="1",devices="i8042",info="IR-IO-APIC-edge",type="12"} 1021
node_interrupts_total{cpu="1",devices="i915",info="IR-PCI-MSI-edge",type="44"} 226313
node_interrupts_total{cpu="1",devices="iwlwifi",info="IR-PCI-MSI-edge",type="46"} 130
node_interrupts_total{cpu="1",devices="mei_me",info="IR-PCI-MSI-edge",type="45"} 22
node_interrupts_total{cpu="1",devices="rtc0",info="IR-IO-APIC-edge",type="8"} 0
node_interrupts_total{cpu="1",devices="snd_hda_intel",info="IR-PCI-MSI-edge",type="47"} 224
node_interrupts_total{cpu="1",devices="timer",info="IR-IO-APIC-edge",type="0"} 0
node_interrupts_total{cpu="1",devices="xhci_hcd",info="IR-PCI-MSI-edge",type="42"} 1.734637e+06
node_interrupts_total{cpu="2",devices="",info="APIC ICR read retries",type="RTR"} 0
node_interrupts_total{cpu="2",devices="",info="Function call interrupts",type="CAL"} 142912
node_interrupts_total{cpu="2",devices="",info="IRQ work interrupts",type="IWI"} 1.512975e+06
node_interrupts_total{cpu="2",devices="",info="Local timer interrupts",type="LOC"} 1.68393257e+08
node_interrupts_total{cpu="2",devices="",info="Machine check exceptions",type="MCE"} 0
node_interrupts_total{cpu="2",devices="",info="Machine check polls",type="MCP"} 2399
node_interrupts_total{cpu="2",devices="",info="Non-maskable interrupts",type="NMI"} 6211
node_interrupts_total{cpu="2",devices="",info="Performance monitoring interrupts",type="PMI"} 6211
node_interrupts_total{cpu="2",devices="",info="Rescheduling interrupts",type="RES"} 1.5999335e+07
node_interrupts_total{cpu="2",devices="",info="Spurious interrupts",type="SPU"} 0
node_interrupts_total{cpu="2",devices="",info="TLB shootdowns",type="TLB"} 1.0494258e+07
node_interrupts_total{cpu="2",devices="",info="Thermal event interrupts",type="TRM"} 0
node_interrupts_total{cpu="2",devices="",info="Threshold APIC interrupts",type="THR"} 0
node_interrupts_total{cpu="2",devices="acpi",info="IR-IO-APIC-fasteoi",type="9"} 824
node_interrupts_total{cpu="2",devices="ahci",info="IR-PCI-MSI-edge",type="43"} 6.478877e+06
node_interrupts_total{cpu="2",devices="dmar0",info="DMAR_MSI-edge",type="40"} 0
node_interrupts_total{cpu="2",devices="dmar1",info="DMAR_MSI-edge",type="41"} 0
node_interrupts_total{cpu="2",devices="ehci_hcd:usb1, mmc0",info="IR-IO-APIC-fasteoi",type="16"} 293782
node_interrupts_total{cpu="2",devices="ehci_hcd:usb2",info="IR-IO-APIC-fasteoi",type="23"} 1.092032e+06
node_interrupts_total{cpu="2",devices="i8042",info="IR-IO-APIC-edge",type="1"} 28
node_interrupts_total{cpu="2",devices="i8042",info="IR-IO-APIC-edge",type="12"} 240
node_interrupts_total{cpu="2",devices="i915",info="IR-PCI-MSI-edge",type="44"} 347
node_interrupts_total{cpu="2",devices="iwlwifi",info="IR-PCI-MSI-edge",type="46"} 460171
node_interrupts_total{cpu="2",devices="mei_me",info="IR-PCI-MSI-edge",type="45"} 0
node_interrupts_total{cpu="2",devices="rtc0",info="IR-IO-APIC-edge",type="8"} 0
node_interrupts_total{cpu="2",devices="snd_hda_intel",info="IR-PCI-MSI-edge",type="47"} 0
node_interrupts_total{cpu="2",devices="timer",info="IR-IO-APIC-edge",type="0"} 0
node_interrupts_total{cpu="2",devices="xhci_hcd",info="IR-PCI-MSI-edge",type="42"} 440240
node_interrupts_total{cpu="3",devices="",info="APIC ICR read retries",type="RTR"} 0
node_interrupts_total{cpu="3",devices="",info="Function call interrupts",type="CAL"} 155528
node_interrupts_total{cpu="3",devices="",info="IRQ work interrupts",type="IWI"} 2.428828e+06
node_interrupts_total{cpu="3",devices="",info="Local timer interrupts",type="LOC"} 1.30980079e+08
node_interrupts_total{cpu="3",devices="",info="Machine check exceptions",type="MCE"} 0
node_interrupts_total{cpu="3",devices="",info="Machine check polls",type="MCP"} 2399
node_interrupts_total{cpu="3",devices="",info="Non-maskable interrupts",type="NMI"} 4968
node_interrupts_total{cpu="3",devices="",info="Performance monitoring interrupts",type="PMI"} 4968
node_interrupts_total{cpu="3",devices="",info="Rescheduling interrupts",type="RES"} 7.45726e+06
node_interrupts_total{cpu="3",devices="",info="Spurious interrupts",type="SPU"} 0
node_interrupts_total{cpu="3",devices="",info="TLB shootdowns",type="TLB"} 1.0345022e+07
node_interrupts_total{cpu="3",devices="",info="Thermal event interrupts",type="TRM"} 0
node_interrupts_total{cpu="3",devices="",info="Threshold APIC interrupts",type="THR"} 0
node_interrupts_total{cpu="3",devices="acpi",info="IR-IO-APIC-fasteoi",type="9"} 863
node_interrupts_total{cpu="3",devices="ahci",info="IR-PCI-MSI-edge",type="43"} 7.492252e+06
node_interrupts_total{cpu="3",devices="dmar0",info="DMAR_MSI-edge",type="40"} 0
node_interrupts_total{cpu="3",devices="dmar1",info="DMAR_MSI-edge",type="41"} 0
node_interrupts_total{cpu="3",devices="ehci_hcd:usb1, mmc0",info="IR-IO-APIC-fasteoi",type="16"} 351412
node_interrupts_total{cpu="3",devices="ehci_hcd:usb2",info="IR-IO-APIC-fasteoi",type="23"} 2.644609e+06
node_interrupts_total{cpu="3",devices="i8042",info="IR-IO-APIC-edge",type="1"} 28
node_interrupts_total{cpu="3",devices="i8042",info="IR-IO-APIC-edge",type="12"} 198
node_interrupts_total{cpu="3",devices="i915",info="IR-PCI-MSI-edge",type="44"} 633
node_interrupts_total{cpu="3",devices="iwlwifi",info="IR-PCI-MSI-edge",type="46"} 290
node_interrupts_total{cpu="3",devices="mei_me",info="IR-PCI-MSI-edge",type="45"} 0
node_interrupts_total{cpu="3",devices="rtc0",info="IR-IO-APIC-edge",type="8"} 0
node_interrupts_total{cpu="3",devices="snd_hda_intel",info="IR-PCI-MSI-edge",type="47"} 0
node_interrupts_total{cpu="3",devices="timer",info="IR-IO-APIC-edge",type="0"} 0
node_interrupts_total{cpu="3",devices="xhci_hcd",info="IR-PCI-MSI-edge",type="42"} 2.434308e+06
# HELP node_intr_total Total number of interrupts serviced.
# TYPE node_intr_total counter
node_intr_total 8.885917e+06
# HELP node_ipvs_backend_connections_active The current active connections by local and remote address.
# TYPE node_ipvs_backend_connections_active gauge
node_ipvs_backend_connections_active{local_address="",local_mark="10001000",local_port="0",proto="FWM",remote_address="192.168.49.32",remote_port="3306"} 321
node_ipvs_backend_connections_active{local_address="",local_mark="10001000",local_port="0",proto="FWM",remote_address="192.168.50.26",remote_port="3306"} 64
node_ipvs_backend_connections_active{local_address="192.168.0.22",local_mark="",local_port="3306",proto="TCP",remote_address="192.168.82.22",remote_port="3306"} 248
node_ipvs_backend_connections_active{local_address="192.168.0.22",local_mark="",local_port="3306",proto="TCP",remote_address="192.168.83.21",remote_port="3306"} 248
node_ipvs_backend_connections_active{local_address="192.168.0.22",local_mark="",local_port="3306",proto="TCP",remote_address="192.168.83.24",remote_port="3306"} 248
node_ipvs_backend_connections_active{local_address="192.168.0.55",local_mark="",local_port="3306",proto="TCP",remote_address="192.168.49.32",remote_port="3306"} 0
node_ipvs_backend_connections_active{local_address="192.168.0.55",local_mark="",local_port="3306",proto="TCP",remote_address="192.168.50.26",remote_port="3306"} 0
node_ipvs_backend_connections_active{local_address="192.168.0.57",local_mark="",local_port="3306",proto="TCP",remote_address="192.168.50.21",remote_port="3306"} 1498
node_ipvs_backend_connections_active{local_address="192.168.0.57",local_mark="",local_port="3306",proto="TCP",remote_address="192.168.82.21",remote_port="3306"} 1499
node_ipvs_backend_connections_active{local_address="192.168.0.57",local_mark="",local_port="3306",proto="TCP",remote_address="192.168.84.22",remote_port="3306"} 0
# HELP node_ipvs_backend_connections_inactive The current inactive connections by local and remote address.
# TYPE node_ipvs_backend_connections_inactive gauge
node_ipvs_backend_connections_inactive{local_address="",local_mark="10001000",local_port="0",proto="FWM",remote_address="192.168.49.32",remote_port="3306"} 5
node_ipvs_backend_connections_inactive{local_address="",local_mark="10001000",local_port="0",proto="FWM",remote_address="192.168.50.26",remote_port="3306"} 1
node_ipvs_backend_connections_inactive{local_address="192.168.0.22",local_mark="",local_port="3306",proto="TCP",remote_address="192.168.82.22",remote_port="3306"} 2
node_ipvs_backend_connections_inactive{local_address="192.168.0.22",local_mark="",local_port="3306",proto="TCP",remote_address="192.168.83.21",remote_port="3306"} 1
node_ipvs_backend_connections_inactive{local_address="192.168.0.22",local_mark="",local_port="3306",proto="TCP",remote_address="192.168.83.24",remote_port="3306"} 2
node_ipvs_backend_connections_inactive{local_address="192.168.0.55",local_mark="",local_port="3306",proto="TCP",remote_address="192.168.49.32",remote_port="3306"} 0
node_ipvs_backend_connections_inactive{local_address="192.168.0.55",local_mark="",local_port="3306",proto="TCP",remote_address="192.168.50.26",remote_port="3306"} 0
node_ipvs_backend_connections_inactive{local_address="192.168.0.57",local_mark="",local_port="3306",proto="TCP",remote_address="192.168.50.21",remote_port="3306"} 0
node_ipvs_backend_connections_inactive{local_address="192.168.0.57",local_mark="",local_port="3306",proto="TCP",remote_address="192.168.82.21",remote_port="3306"} 0
node_ipvs_backend_connections_inactive{local_address="192.168.0.57",local_mark="",local_port="3306",proto="TCP",remote_address="192.168.84.22",remote_port="3306"} 0
# HELP node_ipvs_backend_weight The current backend weight by local and remote address.
# TYPE node_ipvs_backend_weight gauge
node_ipvs_backend_weight{local_address="",local_mark="10001000",local_port="0",proto="FWM",remote_address="192.168.49.32",remote_port="3306"} 100
node_ipvs_backend_weight{local_address="",local_mark="10001000",local_port="0",proto="FWM",remote_address="192.168.50.26",remote_port="3306"} 20
node_ipvs_backend_weight{local_address="192.168.0.22",local_mark="",local_port="3306",proto="TCP",remote_address="192.168.82.22",remote_port="3306"} 100
node_ipvs_backend_weight{local_address="192.168.0.22",local_mark="",local_port="3306",proto="TCP",remote_address="192.168.83.21",remote_port="3306"} 100
node_ipvs_backend_weight{local_address="192.168.0.22",local_mark="",local_port="3306",proto="TCP",remote_address="192.168.83.24",remote_port="3306"} 100
node_ipvs_backend_weight{local_address="192.168.0.55",local_mark="",local_port="3306",proto="TCP",remote_address="192.168.49.32",remote_port="3306"} 100
node_ipvs_backend_weight{local_address="192.168.0.55",local_mark="",local_port="3306",proto="TCP",remote_address="192.168.50.26",remote_port="3306"} 0
node_ipvs_backend_weight{local_address="192.168.0.57",local_mark="",local_port="3306",proto="TCP",remote_address="192.168.50.21",remote_port="3306"} 100
node_ipvs_backend_weight{local_address="192.168.0.57",local_mark="",local_port="3306",proto="TCP",remote_address="192.168.82.21",remote_port="3306"} 100
node_ipvs_backend_weight{local_address="192.168.0.57",local_mark="",local_port="3306",proto="TCP",remote_address="192.168.84.22",remote_port="3306"} 0
# HELP node_ipvs_connections_total The total number of connections made.
# TYPE node_ipvs_connections_total counter
node_ipvs_connections_total 2.3765872e+07
# HELP node_ipvs_incoming_bytes_total The total amount of incoming data.
# TYPE node_ipvs_incoming_bytes_total counter
node_ipvs_incoming_bytes_total 8.9991519156915e+13
# HELP node_ipvs_incoming_packets_total The total number of incoming packets.
# TYPE node_ipvs_incoming_packets_total counter
node_ipvs_incoming_packets_total 3.811989221e+09
# HELP node_ipvs_outgoing_bytes_total The total amount of outgoing data.
# TYPE node_ipvs_outgoing_bytes_total counter
node_ipvs_outgoing_bytes_total 0
# HELP node_ipvs_outgoing_packets_total The total number of outgoing packets.
# TYPE node_ipvs_outgoing_packets_total counter
node_ipvs_outgoing_packets_total 0
# HELP node_ksmd_full_scans_total ksmd 'full_scans' file.
# TYPE node_ksmd_full_scans_total counter
node_ksmd_full_scans_total 323
# HELP node_ksmd_merge_across_nodes ksmd 'merge_across_nodes' file.
# TYPE node_ksmd_merge_across_nodes gauge
node_ksmd_merge_across_nodes 1
# HELP node_ksmd_pages_shared ksmd 'pages_shared' file.
# TYPE node_ksmd_pages_shared gauge
node_ksmd_pages_shared 1
# HELP node_ksmd_pages_sharing ksmd 'pages_sharing' file.
# TYPE node_ksmd_pages_sharing gauge
node_ksmd_pages_sharing 255
# HELP node_ksmd_pages_to_scan ksmd 'pages_to_scan' file.
# TYPE node_ksmd_pages_to_scan gauge
node_ksmd_pages_to_scan 100
# HELP node_ksmd_pages_unshared ksmd 'pages_unshared' file.
# TYPE node_ksmd_pages_unshared gauge
node_ksmd_pages_unshared 0
# HELP node_ksmd_pages_volatile ksmd 'pages_volatile' file.
# TYPE node_ksmd_pages_volatile gauge
node_ksmd_pages_volatile 0
# HELP node_ksmd_run ksmd 'run' file.
# TYPE node_ksmd_run gauge
node_ksmd_run 1
# HELP node_ksmd_sleep_seconds ksmd 'sleep_millisecs' file.
# TYPE node_ksmd_sleep_seconds gauge
node_ksmd_sleep_seconds 0.02
# HELP node_load1 1m load average.
# TYPE node_load1 gauge
node_load1 0.21
# HELP node_load15 15m load average.
# TYPE node_load15 gauge
node_load15 0.39
# HELP node_load5 5m load average.
# TYPE node_load5 gauge
node_load5 0.37
# HELP node_md_blocks Total number of blocks on device.
# TYPE node_md_blocks gauge
node_md_blocks{device="md0"} 248896
node_md_blocks{device="md00"} 4.186624e+06
node_md_blocks{device="md10"} 3.14159265e+08
node_md_blocks{device="md101"} 322560
node_md_blocks{device="md11"} 4.190208e+06
node_md_blocks{device="md12"} 3.886394368e+09
node_md_blocks{device="md120"} 2.095104e+06
node_md_blocks{device="md126"} 1.855870976e+09
node_md_blocks{device="md127"} 3.12319552e+08
node_md_blocks{device="md219"} 7932
node_md_blocks{device="md3"} 5.853468288e+09
node_md_blocks{device="md4"} 4.883648e+06
node_md_blocks{device="md6"} 1.95310144e+08
node_md_blocks{device="md7"} 7.813735424e+09
node_md_blocks{device="md8"} 1.95310144e+08
node_md_blocks{device="md9"} 523968
# HELP node_md_blocks_synced Number of blocks synced on device.
# TYPE node_md_blocks_synced gauge
node_md_blocks_synced{device="md0"} 248896
node_md_blocks_synced{device="md00"} 4.186624e+06
node_md_blocks_synced{device="md10"} 3.14159265e+08
node_md_blocks_synced{device="md101"} 322560
node_md_blocks_synced{device="md11"} 0
node_md_blocks_synced{device="md12"} 3.886394368e+09
node_md_blocks_synced{device="md120"} 2.095104e+06
node_md_blocks_synced{device="md126"} 1.855870976e+09
node_md_blocks_synced{device="md127"} 3.12319552e+08
node_md_blocks_synced{device="md219"} 7932
node_md_blocks_synced{device="md3"} 5.853468288e+09
node_md_blocks_synced{device="md4"} 4.883648e+06
node_md_blocks_synced{device="md6"} 1.6775552e+07
node_md_blocks_synced{device="md7"} 7.813735424e+09
node_md_blocks_synced{device="md8"} 1.6775552e+07
node_md_blocks_synced{device="md9"} 0
# HELP node_md_disks Number of active/failed/spare disks of device.
# TYPE node_md_disks gauge
node_md_disks{device="md0",state="active"} 2
node_md_disks{device="md0",state="failed"} 0
node_md_disks{device="md0",state="spare"} 0
node_md_disks{device="md00",state="active"} 1
node_md_disks{device="md00",state="failed"} 0
node_md_disks{device="md00",state="spare"} 0
node_md_disks{device="md10",state="active"} 2
node_md_disks{device="md10",state="failed"} 0
node_md_disks{device="md10",state="spare"} 0
node_md_disks{device="md101",state="active"} 3
node_md_disks{device="md101",state="failed"} 0
node_md_disks{device="md101",state="spare"} 0
node_md_disks{device="md11",state="active"} 2
node_md_disks{device="md11",state="failed"} 1
node_md_disks{device="md11",state="spare"} 2
node_md_disks{device="md12",state="active"} 2
node_md_disks{device="md12",state="failed"} 0
node_md_disks{device="md12",state="spare"} 0
node_md_disks{device="md120",state="active"} 2
node_md_disks{device="md120",state="failed"} 0
node_md_disks{device="md120",state="spare"} 0
node_md_disks{device="md126",state="active"} 2
node_md_disks{device="md126",state="failed"} 0
node_md_disks{device="md126",state="spare"} 0
node_md_disks{device="md127",state="active"} 2
node_md_disks{device="md127",state="failed"} 0
node_md_disks{device="md127",state="spare"} 0
node_md_disks{device="md219",state="active"} 0
node_md_disks{device="md219",state="failed"} 0
node_md_disks{device="md219",state="spare"} 3
node_md_disks{device="md3",state="active"} 8
node_md_disks{device="md3",state="failed"} 0
node_md_disks{device="md3",state="spare"} 2
node_md_disks{device="md4",state="active"} 0
node_md_disks{device="md4",state="failed"} 1
node_md_disks{device="md4",state="spare"} 1
node_md_disks{device="md6",state="active"} 1
node_md_disks{device="md6",state="failed"} 1
node_md_disks{device="md6",state="spare"} 1
node_md_disks{device="md7",state="active"} 3
node_md_disks{device="md7",state="failed"} 1
node_md_disks{device="md7",state="spare"} 0
node_md_disks{device="md8",state="active"} 2
node_md_disks{device="md8",state="failed"} 0
node_md_disks{device="md8",state="spare"} 2
node_md_disks{device="md9",state="active"} 4
node_md_disks{device="md9",state="failed"} 2
node_md_disks{device="md9",state="spare"} 1
# HELP node_md_disks_required Total number of disks of device.
# TYPE node_md_disks_required gauge
node_md_disks_required{device="md0"} 2
node_md_disks_required{device="md00"} 1
node_md_disks_required{device="md10"} 2
node_md_disks_required{device="md101"} 3
node_md_disks_required{device="md11"} 2
node_md_disks_required{device="md12"} 2
node_md_disks_required{device="md120"} 2
node_md_disks_required{device="md126"} 2
node_md_disks_required{device="md127"} 2
node_md_disks_required{device="md219"} 0
node_md_disks_required{device="md3"} 8
node_md_disks_required{device="md4"} 0
node_md_disks_required{device="md6"} 2
node_md_disks_required{device="md7"} 4
node_md_disks_required{device="md8"} 2
node_md_disks_required{device="md9"} 4
# HELP node_md_state Indicates the state of the md-device.
# TYPE node_md_state gauge
node_md_state{device="md0",state="active"} 1
node_md_state{device="md0",state="inactive"} 0
node_md_state{device="md0",state="recovering"} 0
node_md_state{device="md0",state="resync"} 0
node_md_state{device="md00",state="active"} 1
node_md_state{device="md00",state="inactive"} 0
node_md_state{device="md00",state="recovering"} 0
node_md_state{device="md00",state="resync"} 0
node_md_state{device="md10",state="active"} 1
node_md_state{device="md10",state="inactive"} 0
node_md_state{device="md10",state="recovering"} 0
node_md_state{device="md10",state="resync"} 0
node_md_state{device="md101",state="active"} 1
node_md_state{device="md101",state="inactive"} 0
node_md_state{device="md101",state="recovering"} 0
node_md_state{device="md101",state="resync"} 0
node_md_state{device="md11",state="active"} 0
node_md_state{device="md11",state="inactive"} 0
node_md_state{device="md11",state="recovering"} 0
node_md_state{device="md11",state="resync"} 1
node_md_state{device="md12",state="active"} 1
node_md_state{device="md12",state="inactive"} 0
node_md_state{device="md12",state="recovering"} 0
node_md_state{device="md12",state="resync"} 0
node_md_state{device="md120",state="active"} 1
node_md_state{device="md120",state="inactive"} 0
node_md_state{device="md120",state="recovering"} 0
node_md_state{device="md120",state="resync"} 0
node_md_state{device="md126",state="active"} 1
node_md_state{device="md126",state="inactive"} 0
node_md_state{device="md126",state="recovering"} 0
node_md_state{device="md126",state="resync"} 0
node_md_state{device="md127",state="active"} 1
node_md_state{device="md127",state="inactive"} 0
node_md_state{device="md127",state="recovering"} 0
node_md_state{device="md127",state="resync"} 0
node_md_state{device="md219",state="active"} 0
node_md_state{device="md219",state="inactive"} 1
node_md_state{device="md219",state="recovering"} 0
node_md_state{device="md219",state="resync"} 0
node_md_state{device="md3",state="active"} 1
node_md_state{device="md3",state="inactive"} 0
node_md_state{device="md3",state="recovering"} 0
node_md_state{device="md3",state="resync"} 0
node_md_state{device="md4",state="active"} 0
node_md_state{device="md4",state="inactive"} 1
node_md_state{device="md4",state="recovering"} 0
node_md_state{device="md4",state="resync"} 0
node_md_state{device="md6",state="active"} 0
node_md_state{device="md6",state="inactive"} 0
node_md_state{device="md6",state="recovering"} 1
node_md_state{device="md6",state="resync"} 0
node_md_state{device="md7",state="active"} 1
node_md_state{device="md7",state="inactive"} 0
node_md_state{device="md7",state="recovering"} 0
node_md_state{device="md7",state="resync"} 0
node_md_state{device="md8",state="active"} 0
node_md_state{device="md8",state="inactive"} 0
node_md_state{device="md8",state="recovering"} 0
node_md_state{device="md8",state="resync"} 1
node_md_state{device="md9",state="active"} 0
node_md_state{device="md9",state="inactive"} 0
node_md_state{device="md9",state="recovering"} 0
node_md_state{device="md9",state="resync"} 1
# HELP node_memory_Active_anon_bytes Memory information field Active_anon_bytes.
# TYPE node_memory_Active_anon_bytes gauge
node_memory_Active_anon_bytes 2.068484096e+09
# HELP node_memory_Active_bytes Memory information field Active_bytes.
# TYPE node_memory_Active_bytes gauge
node_memory_Active_bytes 2.287017984e+09
# HELP node_memory_Active_file_bytes Memory information field Active_file_bytes.
# TYPE node_memory_Active_file_bytes gauge
node_memory_Active_file_bytes 2.18533888e+08
# HELP node_memory_AnonHugePages_bytes Memory information field AnonHugePages_bytes.
# TYPE node_memory_AnonHugePages_bytes gauge
node_memory_AnonHugePages_bytes 0
# HELP node_memory_AnonPages_bytes Memory information field AnonPages_bytes.
# TYPE node_memory_AnonPages_bytes gauge
node_memory_AnonPages_bytes 2.298032128e+09
# HELP node_memory_Bounce_bytes Memory information field Bounce_bytes.
# TYPE node_memory_Bounce_bytes gauge
node_memory_Bounce_bytes 0
# HELP node_memory_Buffers_bytes Memory information field Buffers_bytes.
# TYPE node_memory_Buffers_bytes gauge
node_memory_Buffers_bytes 2.256896e+07
# HELP node_memory_Cached_bytes Memory information field Cached_bytes.
# TYPE node_memory_Cached_bytes gauge
node_memory_Cached_bytes 9.53229312e+08
# HELP node_memory_CommitLimit_bytes Memory information field CommitLimit_bytes.
# TYPE node_memory_CommitLimit_bytes gauge
node_memory_CommitLimit_bytes 6.210940928e+09
# HELP node_memory_Committed_AS_bytes Memory information field Committed_AS_bytes.
# TYPE node_memory_Committed_AS_bytes gauge
node_memory_Committed_AS_bytes 8.023486464e+09
# HELP node_memory_DirectMap2M_bytes Memory information field DirectMap2M_bytes.
# TYPE node_memory_DirectMap2M_bytes gauge
node_memory_DirectMap2M_bytes 3.787456512e+09
# HELP node_memory_DirectMap4k_bytes Memory information field DirectMap4k_bytes.
# TYPE node_memory_DirectMap4k_bytes gauge
node_memory_DirectMap4k_bytes 1.9011584e+08
# HELP node_memory_Dirty_bytes Memory information field Dirty_bytes.
# TYPE node_memory_Dirty_bytes gauge
node_memory_Dirty_bytes 1.077248e+06
# HELP node_memory_HardwareCorrupted_bytes Memory information field HardwareCorrupted_bytes.
# TYPE node_memory_HardwareCorrupted_bytes gauge
node_memory_HardwareCorrupted_bytes 0
# HELP node_memory_HugePages_Free Memory information field HugePages_Free.
# TYPE node_memory_HugePages_Free gauge
node_memory_HugePages_Free 0
# HELP node_memory_HugePages_Rsvd Memory information field HugePages_Rsvd.
# TYPE node_memory_HugePages_Rsvd gauge
node_memory_HugePages_Rsvd 0
# HELP node_memory_HugePages_Surp Memory information field HugePages_Surp.
# TYPE node_memory_HugePages_Surp gauge
node_memory_HugePages_Surp 0
# HELP node_memory_HugePages_Total Memory information field HugePages_Total.
# TYPE node_memory_HugePages_Total gauge
node_memory_HugePages_Total 0
# HELP node_memory_Hugepagesize_bytes Memory information field Hugepagesize_bytes.
# TYPE node_memory_Hugepagesize_bytes gauge
node_memory_Hugepagesize_bytes 2.097152e+06
# HELP node_memory_Inactive_anon_bytes Memory information field Inactive_anon_bytes.
# TYPE node_memory_Inactive_anon_bytes gauge
node_memory_Inactive_anon_bytes 9.04245248e+08
# HELP node_memory_Inactive_bytes Memory information field Inactive_bytes.
# TYPE node_memory_Inactive_bytes gauge
node_memory_Inactive_bytes 1.053417472e+09
# HELP node_memory_Inactive_file_bytes Memory information field Inactive_file_bytes.
# TYPE node_memory_Inactive_file_bytes gauge
node_memory_Inactive_file_bytes 1.49172224e+08
# HELP node_memory_KernelStack_bytes Memory information field KernelStack_bytes.
# TYPE node_memory_KernelStack_bytes gauge
node_memory_KernelStack_bytes 5.9392e+06
# HELP node_memory_Mapped_bytes Memory information field Mapped_bytes.
# TYPE node_memory_Mapped_bytes gauge
node_memory_Mapped_bytes 2.4496128e+08
# HELP node_memory_MemFree_bytes Memory information field MemFree_bytes.
# TYPE node_memory_MemFree_bytes gauge
node_memory_MemFree_bytes 2.30883328e+08
# HELP node_memory_MemTotal_bytes Memory information field MemTotal_bytes.
# TYPE node_memory_MemTotal_bytes gauge
node_memory_MemTotal_bytes 3.831959552e+09
# HELP node_memory_Mlocked_bytes Memory information field Mlocked_bytes.
# TYPE node_memory_Mlocked_bytes gauge
node_memory_Mlocked_bytes 32768
# HELP node_memory_NFS_Unstable_bytes Memory information field NFS_Unstable_bytes.
# TYPE node_memory_NFS_Unstable_bytes gauge
node_memory_NFS_Unstable_bytes 0
# HELP node_memory_PageTables_bytes Memory information field PageTables_bytes.
# TYPE node_memory_PageTables_bytes gauge
node_memory_PageTables_bytes 7.7017088e+07
# HELP node_memory_SReclaimable_bytes Memory information field SReclaimable_bytes.
# TYPE node_memory_SReclaimable_bytes gauge
node_memory_SReclaimable_bytes 4.5846528e+07
# HELP node_memory_SUnreclaim_bytes Memory information field SUnreclaim_bytes.
# TYPE node_memory_SUnreclaim_bytes gauge
node_memory_SUnreclaim_bytes 5.545984e+07
# HELP node_memory_Shmem_bytes Memory information field Shmem_bytes.
# TYPE node_memory_Shmem_bytes gauge
node_memory_Shmem_bytes 6.0809216e+08
# HELP node_memory_Slab_bytes Memory information field Slab_bytes.
# TYPE node_memory_Slab_bytes gauge
node_memory_Slab_bytes 1.01306368e+08
# HELP node_memory_SwapCached_bytes Memory information field SwapCached_bytes.
# TYPE node_memory_SwapCached_bytes gauge
node_memory_SwapCached_bytes 1.97124096e+08
# HELP node_memory_SwapFree_bytes Memory information field SwapFree_bytes.
# TYPE node_memory_SwapFree_bytes gauge
node_memory_SwapFree_bytes 3.23108864e+09
# HELP node_memory_SwapTotal_bytes Memory information field SwapTotal_bytes.
# TYPE node_memory_SwapTotal_bytes gauge
node_memory_SwapTotal_bytes 4.2949632e+09
# HELP node_memory_Unevictable_bytes Memory information field Unevictable_bytes.
# TYPE node_memory_Unevictable_bytes gauge
node_memory_Unevictable_bytes 32768
# HELP node_memory_VmallocChunk_bytes Memory information field VmallocChunk_bytes.
# TYPE node_memory_VmallocChunk_bytes gauge
node_memory_VmallocChunk_bytes 3.5183963009024e+13
# HELP node_memory_VmallocTotal_bytes Memory information field VmallocTotal_bytes.
# TYPE node_memory_VmallocTotal_bytes gauge
node_memory_VmallocTotal_bytes 3.5184372087808e+13
# HELP node_memory_VmallocUsed_bytes Memory information field VmallocUsed_bytes.
# TYPE node_memory_VmallocUsed_bytes gauge
node_memory_VmallocUsed_bytes 3.6130816e+08
# HELP node_memory_WritebackTmp_bytes Memory information field WritebackTmp_bytes.
# TYPE node_memory_WritebackTmp_bytes gauge
node_memory_WritebackTmp_bytes 0
# HELP node_memory_Writeback_bytes Memory information field Writeback_bytes.
# TYPE node_memory_Writeback_bytes gauge
node_memory_Writeback_bytes 0
# HELP node_memory_numa_Active Memory information field Active.
# TYPE node_memory_numa_Active gauge
node_memory_numa_Active{node="0"} 5.58733312e+09
node_memory_numa_Active{node="1"} 5.739003904e+09
node_memory_numa_Active{node="2"} 5.739003904e+09
# HELP node_memory_numa_Active_anon Memory information field Active_anon.
# TYPE node_memory_numa_Active_anon gauge
node_memory_numa_Active_anon{node="0"} 7.07915776e+08
node_memory_numa_Active_anon{node="1"} 6.04635136e+08
node_memory_numa_Active_anon{node="2"} 6.04635136e+08
# HELP node_memory_numa_Active_file Memory information field Active_file.
# TYPE node_memory_numa_Active_file gauge
node_memory_numa_Active_file{node="0"} 4.879417344e+09
node_memory_numa_Active_file{node="1"} 5.134368768e+09
node_memory_numa_Active_file{node="2"} 5.134368768e+09
# HELP node_memory_numa_AnonHugePages Memory information field AnonHugePages.
# TYPE node_memory_numa_AnonHugePages gauge
node_memory_numa_AnonHugePages{node="0"} 1.50994944e+08
node_memory_numa_AnonHugePages{node="1"} 9.2274688e+07
node_memory_numa_AnonHugePages{node="2"} 9.2274688e+07
# HELP node_memory_numa_AnonPages Memory information field AnonPages.
# TYPE node_memory_numa_AnonPages gauge
node_memory_numa_AnonPages{node="0"} 8.07112704e+08
node_memory_numa_AnonPages{node="1"} 6.88058368e+08
node_memory_numa_AnonPages{node="2"} 6.88058368e+08
# HELP node_memory_numa_Bounce Memory information field Bounce.
# TYPE node_memory_numa_Bounce gauge
node_memory_numa_Bounce{node="0"} 0
node_memory_numa_Bounce{node="1"} 0
node_memory_numa_Bounce{node="2"} 0
# HELP node_memory_numa_Dirty Memory information field Dirty.
# TYPE node_memory_numa_Dirty gauge
node_memory_numa_Dirty{node="0"} 20480
node_memory_numa_Dirty{node="1"} 122880
node_memory_numa_Dirty{node="2"} 122880
# HELP node_memory_numa_FilePages Memory information field FilePages.
# TYPE node_memory_numa_FilePages gauge
node_memory_numa_FilePages{node="0"} 7.1855017984e+10
node_memory_numa_FilePages{node="1"} 8.5585088512e+10
node_memory_numa_FilePages{node="2"} 8.5585088512e+10
# HELP node_memory_numa_HugePages_Free Memory information field HugePages_Free.
# TYPE node_memory_numa_HugePages_Free gauge
node_memory_numa_HugePages_Free{node="0"} 0
node_memory_numa_HugePages_Free{node="1"} 0
node_memory_numa_HugePages_Free{node="2"} 0
# HELP node_memory_numa_HugePages_Surp Memory information field HugePages_Surp.
# TYPE node_memory_numa_HugePages_Surp gauge
node_memory_numa_HugePages_Surp{node="0"} 0
node_memory_numa_HugePages_Surp{node="1"} 0
node_memory_numa_HugePages_Surp{node="2"} 0
# HELP node_memory_numa_HugePages_Total Memory information field HugePages_Total.
# TYPE node_memory_numa_HugePages_Total gauge
node_memory_numa_HugePages_Total{node="0"} 0
node_memory_numa_HugePages_Total{node="1"} 0
node_memory_numa_HugePages_Total{node="2"} 0
# HELP node_memory_numa_Inactive Memory information field Inactive.
# TYPE node_memory_numa_Inactive gauge
node_memory_numa_Inactive{node="0"} 6.0569788416e+10
node_memory_numa_Inactive{node="1"} 7.3165406208e+10
node_memory_numa_Inactive{node="2"} 7.3165406208e+10
# HELP node_memory_numa_Inactive_anon Memory information field Inactive_anon.
# TYPE node_memory_numa_Inactive_anon gauge
node_memory_numa_Inactive_anon{node="0"} 3.48626944e+08
node_memory_numa_Inactive_anon{node="1"} 2.91930112e+08
node_memory_numa_Inactive_anon{node="2"} 2.91930112e+08
# HELP node_memory_numa_Inactive_file Memory information field Inactive_file.
# TYPE node_memory_numa_Inactive_file gauge
node_memory_numa_Inactive_file{node="0"} 6.0221161472e+10
node_memory_numa_Inactive_file{node="1"} 7.2873476096e+10
node_memory_numa_Inactive_file{node="2"} 7.2873476096e+10
# HELP node_memory_numa_KernelStack Memory information field KernelStack.
# TYPE node_memory_numa_KernelStack gauge
node_memory_numa_KernelStack{node="0"} 3.4832384e+07
node_memory_numa_KernelStack{node="1"} 3.1850496e+07
node_memory_numa_KernelStack{node="2"} 3.1850496e+07
# HELP node_memory_numa_Mapped Memory information field Mapped.
# TYPE node_memory_numa_Mapped gauge
node_memory_numa_Mapped{node="0"} 9.1570176e+08
node_memory_numa_Mapped{node="1"} 8.84850688e+08
node_memory_numa_Mapped{node="2"} 8.84850688e+08
# HELP node_memory_numa_MemFree Memory information field MemFree.
# TYPE node_memory_numa_MemFree gauge
node_memory_numa_MemFree{node="0"} 5.4303100928e+10
node_memory_numa_MemFree{node="1"} 4.0586022912e+10
node_memory_numa_MemFree{node="2"} 4.0586022912e+10
# HELP node_memory_numa_MemTotal Memory information field MemTotal.
# TYPE node_memory_numa_MemTotal gauge
node_memory_numa_MemTotal{node="0"} 1.3740271616e+11
node_memory_numa_MemTotal{node="1"} 1.37438953472e+11
node_memory_numa_MemTotal{node="2"} 1.37438953472e+11
# HELP node_memory_numa_MemUsed Memory information field MemUsed.
# TYPE node_memory_numa_MemUsed gauge
node_memory_numa_MemUsed{node="0"} 8.3099615232e+10
node_memory_numa_MemUsed{node="1"} 9.685293056e+10
node_memory_numa_MemUsed{node="2"} 9.685293056e+10
# HELP node_memory_numa_Mlocked Memory information field Mlocked.
# TYPE node_memory_numa_Mlocked gauge
node_memory_numa_Mlocked{node="0"} 0
node_memory_numa_Mlocked{node="1"} 0
node_memory_numa_Mlocked{node="2"} 0
# HELP node_memory_numa_NFS_Unstable Memory information field NFS_Unstable.
# TYPE node_memory_numa_NFS_Unstable gauge
node_memory_numa_NFS_Unstable{node="0"} 0
node_memory_numa_NFS_Unstable{node="1"} 0
node_memory_numa_NFS_Unstable{node="2"} 0
# HELP node_memory_numa_PageTables Memory information field PageTables.
# TYPE node_memory_numa_PageTables gauge
node_memory_numa_PageTables{node="0"} 1.46743296e+08
node_memory_numa_PageTables{node="1"} 1.27254528e+08
node_memory_numa_PageTables{node="2"} 1.27254528e+08
# HELP node_memory_numa_SReclaimable Memory information field SReclaimable.
# TYPE node_memory_numa_SReclaimable gauge
node_memory_numa_SReclaimable{node="0"} 4.580478976e+09
node_memory_numa_SReclaimable{node="1"} 4.724822016e+09
node_memory_numa_SReclaimable{node="2"} 4.724822016e+09
# HELP node_memory_numa_SUnreclaim Memory information field SUnreclaim.
# TYPE node_memory_numa_SUnreclaim gauge
node_memory_numa_SUnreclaim{node="0"} 2.23352832e+09
node_memory_numa_SUnreclaim{node="1"} 2.464391168e+09
node_memory_numa_SUnreclaim{node="2"} 2.464391168e+09
# HELP node_memory_numa_Shmem Memory information field Shmem.
# TYPE node_memory_numa_Shmem gauge
node_memory_numa_Shmem{node="0"} 4.900864e+07
node_memory_numa_Shmem{node="1"} 8.968192e+07
node_memory_numa_Shmem{node="2"} 8.968192e+07
# HELP node_memory_numa_Slab Memory information field Slab.
# TYPE node_memory_numa_Slab gauge
node_memory_numa_Slab{node="0"} 6.814007296e+09
node_memory_numa_Slab{node="1"} 7.189213184e+09
node_memory_numa_Slab{node="2"} 7.189213184e+09
# HELP node_memory_numa_Unevictable Memory information field Unevictable.
# TYPE node_memory_numa_Unevictable gauge
node_memory_numa_Unevictable{node="0"} 0
node_memory_numa_Unevictable{node="1"} 0
node_memory_numa_Unevictable{node="2"} 0
# HELP node_memory_numa_Writeback Memory information field Writeback.
# TYPE node_memory_numa_Writeback gauge
node_memory_numa_Writeback{node="0"} 0
node_memory_numa_Writeback{node="1"} 0
node_memory_numa_Writeback{node="2"} 0
# HELP node_memory_numa_WritebackTmp Memory information field WritebackTmp.
# TYPE node_memory_numa_WritebackTmp gauge
node_memory_numa_WritebackTmp{node="0"} 0
node_memory_numa_WritebackTmp{node="1"} 0
node_memory_numa_WritebackTmp{node="2"} 0
# HELP node_memory_numa_interleave_hit_total Memory information field interleave_hit_total.
# TYPE node_memory_numa_interleave_hit_total counter
node_memory_numa_interleave_hit_total{node="0"} 57146
node_memory_numa_interleave_hit_total{node="1"} 57286
node_memory_numa_interleave_hit_total{node="2"} 7286
# HELP node_memory_numa_local_node_total Memory information field local_node_total.
# TYPE node_memory_numa_local_node_total counter
node_memory_numa_local_node_total{node="0"} 1.93454780853e+11
node_memory_numa_local_node_total{node="1"} 3.2671904655e+11
node_memory_numa_local_node_total{node="2"} 2.671904655e+10
# HELP node_memory_numa_numa_foreign_total Memory information field numa_foreign_total.
# TYPE node_memory_numa_numa_foreign_total counter
node_memory_numa_numa_foreign_total{node="0"} 5.98586233e+10
node_memory_numa_numa_foreign_total{node="1"} 1.2624528e+07
node_memory_numa_numa_foreign_total{node="2"} 2.624528e+06
# HELP node_memory_numa_numa_hit_total Memory information field numa_hit_total.
# TYPE node_memory_numa_numa_hit_total counter
node_memory_numa_numa_hit_total{node="0"} 1.93460335812e+11
node_memory_numa_numa_hit_total{node="1"} 3.26720946761e+11
node_memory_numa_numa_hit_total{node="2"} 2.6720946761e+10
# HELP node_memory_numa_numa_miss_total Memory information field numa_miss_total.
# TYPE node_memory_numa_numa_miss_total counter
node_memory_numa_numa_miss_total{node="0"} 1.2624528e+07
node_memory_numa_numa_miss_total{node="1"} 5.9858626709e+10
node_memory_numa_numa_miss_total{node="2"} 9.858626709e+09
# HELP node_memory_numa_other_node_total Memory information field other_node_total.
# TYPE node_memory_numa_other_node_total counter
node_memory_numa_other_node_total{node="0"} 1.8179487e+07
node_memory_numa_other_node_total{node="1"} 5.986052692e+10
node_memory_numa_other_node_total{node="2"} 9.86052692e+09
# HELP node_mountstats_nfs_age_seconds_total The age of the NFS mount in seconds.
# TYPE node_mountstats_nfs_age_seconds_total counter
node_mountstats_nfs_age_seconds_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",protocol="tcp"} 13968
node_mountstats_nfs_age_seconds_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",protocol="udp"} 13968
# HELP node_mountstats_nfs_direct_read_bytes_total Number of bytes read using the read() syscall in O_DIRECT mode.
# TYPE node_mountstats_nfs_direct_read_bytes_total counter
node_mountstats_nfs_direct_read_bytes_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",protocol="tcp"} 0
node_mountstats_nfs_direct_read_bytes_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",protocol="udp"} 0
# HELP node_mountstats_nfs_direct_write_bytes_total Number of bytes written using the write() syscall in O_DIRECT mode.
# TYPE node_mountstats_nfs_direct_write_bytes_total counter
node_mountstats_nfs_direct_write_bytes_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",protocol="tcp"} 0
node_mountstats_nfs_direct_write_bytes_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",protocol="udp"} 0
# HELP node_mountstats_nfs_event_attribute_invalidate_total Number of times cached inode attributes are invalidated.
# TYPE node_mountstats_nfs_event_attribute_invalidate_total counter
node_mountstats_nfs_event_attribute_invalidate_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",protocol="tcp"} 0
node_mountstats_nfs_event_attribute_invalidate_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",protocol="udp"} 0
# HELP node_mountstats_nfs_event_data_invalidate_total Number of times an inode cache is cleared.
# TYPE node_mountstats_nfs_event_data_invalidate_total counter
node_mountstats_nfs_event_data_invalidate_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",protocol="tcp"} 0
node_mountstats_nfs_event_data_invalidate_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",protocol="udp"} 0
# HELP node_mountstats_nfs_event_dnode_revalidate_total Number of times cached dentry nodes are re-validated from the server.
# TYPE node_mountstats_nfs_event_dnode_revalidate_total counter
node_mountstats_nfs_event_dnode_revalidate_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",protocol="tcp"} 226
node_mountstats_nfs_event_dnode_revalidate_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",protocol="udp"} 226
# HELP node_mountstats_nfs_event_inode_revalidate_total Number of times cached inode attributes are re-validated from the server.
# TYPE node_mountstats_nfs_event_inode_revalidate_total counter
node_mountstats_nfs_event_inode_revalidate_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",protocol="tcp"} 52
node_mountstats_nfs_event_inode_revalidate_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",protocol="udp"} 52
# HELP node_mountstats_nfs_event_jukebox_delay_total Number of times the NFS server indicated EJUKEBOX; retrieving data from offline storage.
# TYPE node_mountstats_nfs_event_jukebox_delay_total counter
node_mountstats_nfs_event_jukebox_delay_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",protocol="tcp"} 0
node_mountstats_nfs_event_jukebox_delay_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",protocol="udp"} 0
# HELP node_mountstats_nfs_event_pnfs_read_total Number of NFS v4.1+ pNFS reads.
# TYPE node_mountstats_nfs_event_pnfs_read_total counter
node_mountstats_nfs_event_pnfs_read_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",protocol="tcp"} 0
node_mountstats_nfs_event_pnfs_read_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",protocol="udp"} 0
# HELP node_mountstats_nfs_event_pnfs_write_total Number of NFS v4.1+ pNFS writes.
# TYPE node_mountstats_nfs_event_pnfs_write_total counter
node_mountstats_nfs_event_pnfs_write_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",protocol="tcp"} 0
node_mountstats_nfs_event_pnfs_write_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",protocol="udp"} 0
# HELP node_mountstats_nfs_event_short_read_total Number of times the NFS server gave less data than expected while reading.
# TYPE node_mountstats_nfs_event_short_read_total counter
node_mountstats_nfs_event_short_read_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",protocol="tcp"} 0
node_mountstats_nfs_event_short_read_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",protocol="udp"} 0
# HELP node_mountstats_nfs_event_short_write_total Number of times the NFS server wrote less data than expected while writing.
# TYPE node_mountstats_nfs_event_short_write_total counter
node_mountstats_nfs_event_short_write_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",protocol="tcp"} 0
node_mountstats_nfs_event_short_write_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",protocol="udp"} 0
# HELP node_mountstats_nfs_event_silly_rename_total Number of times a file was removed while still open by another process.
# TYPE node_mountstats_nfs_event_silly_rename_total counter
node_mountstats_nfs_event_silly_rename_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",protocol="tcp"} 0
node_mountstats_nfs_event_silly_rename_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",protocol="udp"} 0
# HELP node_mountstats_nfs_event_truncation_total Number of times files have been truncated.
# TYPE node_mountstats_nfs_event_truncation_total counter
node_mountstats_nfs_event_truncation_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",protocol="tcp"} 0
node_mountstats_nfs_event_truncation_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",protocol="udp"} 0
# HELP node_mountstats_nfs_event_vfs_access_total Number of times permissions have been checked.
# TYPE node_mountstats_nfs_event_vfs_access_total counter
node_mountstats_nfs_event_vfs_access_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",protocol="tcp"} 398
node_mountstats_nfs_event_vfs_access_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",protocol="udp"} 398
# HELP node_mountstats_nfs_event_vfs_file_release_total Number of times files have been closed and released.
# TYPE node_mountstats_nfs_event_vfs_file_release_total counter
node_mountstats_nfs_event_vfs_file_release_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",protocol="tcp"} 77
node_mountstats_nfs_event_vfs_file_release_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",protocol="udp"} 77
# HELP node_mountstats_nfs_event_vfs_flush_total Number of pending writes that have been forcefully flushed to the server.
# TYPE node_mountstats_nfs_event_vfs_flush_total counter
node_mountstats_nfs_event_vfs_flush_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",protocol="tcp"} 77
node_mountstats_nfs_event_vfs_flush_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",protocol="udp"} 77
# HELP node_mountstats_nfs_event_vfs_fsync_total Number of times fsync() has been called on directories and files.
# TYPE node_mountstats_nfs_event_vfs_fsync_total counter
node_mountstats_nfs_event_vfs_fsync_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",protocol="tcp"} 0
node_mountstats_nfs_event_vfs_fsync_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",protocol="udp"} 0
# HELP node_mountstats_nfs_event_vfs_getdents_total Number of times directory entries have been read with getdents().
# TYPE node_mountstats_nfs_event_vfs_getdents_total counter
node_mountstats_nfs_event_vfs_getdents_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",protocol="tcp"} 0
node_mountstats_nfs_event_vfs_getdents_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",protocol="udp"} 0
# HELP node_mountstats_nfs_event_vfs_lock_total Number of times locking has been attempted on a file.
# TYPE node_mountstats_nfs_event_vfs_lock_total counter
node_mountstats_nfs_event_vfs_lock_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",protocol="tcp"} 0
node_mountstats_nfs_event_vfs_lock_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",protocol="udp"} 0
# HELP node_mountstats_nfs_event_vfs_lookup_total Number of times a directory lookup has occurred.
# TYPE node_mountstats_nfs_event_vfs_lookup_total counter
node_mountstats_nfs_event_vfs_lookup_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",protocol="tcp"} 13
node_mountstats_nfs_event_vfs_lookup_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",protocol="udp"} 13
# HELP node_mountstats_nfs_event_vfs_open_total Number of times files or directories have been opened.
# TYPE node_mountstats_nfs_event_vfs_open_total counter
node_mountstats_nfs_event_vfs_open_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",protocol="tcp"} 1
node_mountstats_nfs_event_vfs_open_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",protocol="udp"} 1
# HELP node_mountstats_nfs_event_vfs_read_page_total Number of pages read directly via mmap()'d files.
# TYPE node_mountstats_nfs_event_vfs_read_page_total counter
node_mountstats_nfs_event_vfs_read_page_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",protocol="tcp"} 0
node_mountstats_nfs_event_vfs_read_page_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",protocol="udp"} 0
# HELP node_mountstats_nfs_event_vfs_read_pages_total Number of times a group of pages have been read.
# TYPE node_mountstats_nfs_event_vfs_read_pages_total counter
node_mountstats_nfs_event_vfs_read_pages_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",protocol="tcp"} 331
node_mountstats_nfs_event_vfs_read_pages_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",protocol="udp"} 331
# HELP node_mountstats_nfs_event_vfs_setattr_total Number of times file or directory attributes have been changed.
# TYPE node_mountstats_nfs_event_vfs_setattr_total counter
node_mountstats_nfs_event_vfs_setattr_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",protocol="tcp"} 0
node_mountstats_nfs_event_vfs_setattr_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",protocol="udp"} 0
# HELP node_mountstats_nfs_event_vfs_update_page_total Number of updates (and potential writes) to pages.
# TYPE node_mountstats_nfs_event_vfs_update_page_total counter
node_mountstats_nfs_event_vfs_update_page_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",protocol="tcp"} 0
node_mountstats_nfs_event_vfs_update_page_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",protocol="udp"} 0
# HELP node_mountstats_nfs_event_vfs_write_page_total Number of pages written directly via mmap()'d files.
# TYPE node_mountstats_nfs_event_vfs_write_page_total counter
node_mountstats_nfs_event_vfs_write_page_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",protocol="tcp"} 0
node_mountstats_nfs_event_vfs_write_page_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",protocol="udp"} 0
# HELP node_mountstats_nfs_event_vfs_write_pages_total Number of times a group of pages have been written.
# TYPE node_mountstats_nfs_event_vfs_write_pages_total counter
node_mountstats_nfs_event_vfs_write_pages_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",protocol="tcp"} 47
node_mountstats_nfs_event_vfs_write_pages_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",protocol="udp"} 47
# HELP node_mountstats_nfs_event_write_extension_total Number of times a file has been grown due to writes beyond its existing end.
# TYPE node_mountstats_nfs_event_write_extension_total counter
node_mountstats_nfs_event_write_extension_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",protocol="tcp"} 0
node_mountstats_nfs_event_write_extension_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",protocol="udp"} 0
# HELP node_mountstats_nfs_operations_major_timeouts_total Number of times a request has had a major timeout for a given operation.
# TYPE node_mountstats_nfs_operations_major_timeouts_total counter
node_mountstats_nfs_operations_major_timeouts_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",operation="ACCESS",protocol="udp"} 0
node_mountstats_nfs_operations_major_timeouts_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",operation="NULL",protocol="tcp"} 0
node_mountstats_nfs_operations_major_timeouts_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",operation="NULL",protocol="udp"} 0
node_mountstats_nfs_operations_major_timeouts_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",operation="READ",protocol="tcp"} 0
node_mountstats_nfs_operations_major_timeouts_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",operation="READ",protocol="udp"} 0
node_mountstats_nfs_operations_major_timeouts_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",operation="WRITE",protocol="tcp"} 0
node_mountstats_nfs_operations_major_timeouts_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",operation="WRITE",protocol="udp"} 0
# HELP node_mountstats_nfs_operations_queue_time_seconds_total Duration all requests spent queued for transmission for a given operation before they were sent, in seconds.
# TYPE node_mountstats_nfs_operations_queue_time_seconds_total counter
node_mountstats_nfs_operations_queue_time_seconds_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",operation="ACCESS",protocol="udp"} 9.007044786793922e+12
node_mountstats_nfs_operations_queue_time_seconds_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",operation="NULL",protocol="tcp"} 0
node_mountstats_nfs_operations_queue_time_seconds_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",operation="NULL",protocol="udp"} 0
node_mountstats_nfs_operations_queue_time_seconds_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",operation="READ",protocol="tcp"} 0.006
node_mountstats_nfs_operations_queue_time_seconds_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",operation="READ",protocol="udp"} 0.006
node_mountstats_nfs_operations_queue_time_seconds_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",operation="WRITE",protocol="tcp"} 0
node_mountstats_nfs_operations_queue_time_seconds_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",operation="WRITE",protocol="udp"} 0
# HELP node_mountstats_nfs_operations_received_bytes_total Number of bytes received for a given operation, including RPC headers and payload.
# TYPE node_mountstats_nfs_operations_received_bytes_total counter
node_mountstats_nfs_operations_received_bytes_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",operation="ACCESS",protocol="udp"} 3.62996810236e+11
node_mountstats_nfs_operations_received_bytes_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",operation="NULL",protocol="tcp"} 0
node_mountstats_nfs_operations_received_bytes_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",operation="NULL",protocol="udp"} 0
node_mountstats_nfs_operations_received_bytes_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",operation="READ",protocol="tcp"} 1.210292152e+09
node_mountstats_nfs_operations_received_bytes_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",operation="READ",protocol="udp"} 1.210292152e+09
node_mountstats_nfs_operations_received_bytes_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",operation="WRITE",protocol="tcp"} 0
node_mountstats_nfs_operations_received_bytes_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",operation="WRITE",protocol="udp"} 0
# HELP node_mountstats_nfs_operations_request_time_seconds_total Duration all requests took from when a request was enqueued to when it was completely handled for a given operation, in seconds.
# TYPE node_mountstats_nfs_operations_request_time_seconds_total counter
node_mountstats_nfs_operations_request_time_seconds_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",operation="ACCESS",protocol="udp"} 1.953587717e+06
node_mountstats_nfs_operations_request_time_seconds_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",operation="NULL",protocol="tcp"} 0
node_mountstats_nfs_operations_request_time_seconds_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",operation="NULL",protocol="udp"} 0
node_mountstats_nfs_operations_request_time_seconds_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",operation="READ",protocol="tcp"} 79.407
node_mountstats_nfs_operations_request_time_seconds_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",operation="READ",protocol="udp"} 79.407
node_mountstats_nfs_operations_request_time_seconds_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",operation="WRITE",protocol="tcp"} 0
node_mountstats_nfs_operations_request_time_seconds_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",operation="WRITE",protocol="udp"} 0
# HELP node_mountstats_nfs_operations_requests_total Number of requests performed for a given operation.
# TYPE node_mountstats_nfs_operations_requests_total counter
node_mountstats_nfs_operations_requests_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",operation="ACCESS",protocol="udp"} 2.927395007e+09
node_mountstats_nfs_operations_requests_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",operation="NULL",protocol="tcp"} 0
node_mountstats_nfs_operations_requests_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",operation="NULL",protocol="udp"} 0
node_mountstats_nfs_operations_requests_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",operation="READ",protocol="tcp"} 1298
node_mountstats_nfs_operations_requests_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",operation="READ",protocol="udp"} 1298
node_mountstats_nfs_operations_requests_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",operation="WRITE",protocol="tcp"} 0
node_mountstats_nfs_operations_requests_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",operation="WRITE",protocol="udp"} 0
# HELP node_mountstats_nfs_operations_response_time_seconds_total Duration all requests took to get a reply back after a request for a given operation was transmitted, in seconds.
# TYPE node_mountstats_nfs_operations_response_time_seconds_total counter
node_mountstats_nfs_operations_response_time_seconds_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",operation="ACCESS",protocol="udp"} 1.667369447e+06
node_mountstats_nfs_operations_response_time_seconds_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",operation="NULL",protocol="tcp"} 0
node_mountstats_nfs_operations_response_time_seconds_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",operation="NULL",protocol="udp"} 0
node_mountstats_nfs_operations_response_time_seconds_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",operation="READ",protocol="tcp"} 79.386
node_mountstats_nfs_operations_response_time_seconds_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",operation="READ",protocol="udp"} 79.386
node_mountstats_nfs_operations_response_time_seconds_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",operation="WRITE",protocol="tcp"} 0
node_mountstats_nfs_operations_response_time_seconds_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",operation="WRITE",protocol="udp"} 0
# HELP node_mountstats_nfs_operations_sent_bytes_total Number of bytes sent for a given operation, including RPC headers and payload.
# TYPE node_mountstats_nfs_operations_sent_bytes_total counter
node_mountstats_nfs_operations_sent_bytes_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",operation="ACCESS",protocol="udp"} 5.26931094212e+11
node_mountstats_nfs_operations_sent_bytes_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",operation="NULL",protocol="tcp"} 0
node_mountstats_nfs_operations_sent_bytes_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",operation="NULL",protocol="udp"} 0
node_mountstats_nfs_operations_sent_bytes_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",operation="READ",protocol="tcp"} 207680
node_mountstats_nfs_operations_sent_bytes_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",operation="READ",protocol="udp"} 207680
node_mountstats_nfs_operations_sent_bytes_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",operation="WRITE",protocol="tcp"} 0
node_mountstats_nfs_operations_sent_bytes_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",operation="WRITE",protocol="udp"} 0
# HELP node_mountstats_nfs_operations_transmissions_total Number of times an actual RPC request has been transmitted for a given operation.
# TYPE node_mountstats_nfs_operations_transmissions_total counter
node_mountstats_nfs_operations_transmissions_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",operation="ACCESS",protocol="udp"} 2.927394995e+09
node_mountstats_nfs_operations_transmissions_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",operation="NULL",protocol="tcp"} 0
node_mountstats_nfs_operations_transmissions_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",operation="NULL",protocol="udp"} 0
node_mountstats_nfs_operations_transmissions_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",operation="READ",protocol="tcp"} 1298
node_mountstats_nfs_operations_transmissions_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",operation="READ",protocol="udp"} 1298
node_mountstats_nfs_operations_transmissions_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",operation="WRITE",protocol="tcp"} 0
node_mountstats_nfs_operations_transmissions_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",operation="WRITE",protocol="udp"} 0
# HELP node_mountstats_nfs_read_bytes_total Number of bytes read using the read() syscall.
# TYPE node_mountstats_nfs_read_bytes_total counter
node_mountstats_nfs_read_bytes_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",protocol="tcp"} 1.20764023e+09
node_mountstats_nfs_read_bytes_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",protocol="udp"} 1.20764023e+09
# HELP node_mountstats_nfs_read_pages_total Number of pages read directly via mmap()'d files.
# TYPE node_mountstats_nfs_read_pages_total counter
node_mountstats_nfs_read_pages_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",protocol="tcp"} 295483
node_mountstats_nfs_read_pages_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",protocol="udp"} 295483
# HELP node_mountstats_nfs_total_read_bytes_total Number of bytes read from the NFS server, in total.
# TYPE node_mountstats_nfs_total_read_bytes_total counter
node_mountstats_nfs_total_read_bytes_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",protocol="tcp"} 1.210214218e+09
node_mountstats_nfs_total_read_bytes_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",protocol="udp"} 1.210214218e+09
# HELP node_mountstats_nfs_total_write_bytes_total Number of bytes written to the NFS server, in total.
# TYPE node_mountstats_nfs_total_write_bytes_total counter
node_mountstats_nfs_total_write_bytes_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",protocol="tcp"} 0
node_mountstats_nfs_total_write_bytes_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",protocol="udp"} 0
# HELP node_mountstats_nfs_transport_backlog_queue_total Total number of items added to the RPC backlog queue.
# TYPE node_mountstats_nfs_transport_backlog_queue_total counter
node_mountstats_nfs_transport_backlog_queue_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",protocol="tcp"} 0
node_mountstats_nfs_transport_backlog_queue_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",protocol="udp"} 0
# HELP node_mountstats_nfs_transport_bad_transaction_ids_total Number of times the NFS server sent a response with a transaction ID unknown to this client.
# TYPE node_mountstats_nfs_transport_bad_transaction_ids_total counter
node_mountstats_nfs_transport_bad_transaction_ids_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",protocol="tcp"} 0
node_mountstats_nfs_transport_bad_transaction_ids_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",protocol="udp"} 0
# HELP node_mountstats_nfs_transport_bind_total Number of times the client has had to establish a connection from scratch to the NFS server.
# TYPE node_mountstats_nfs_transport_bind_total counter
node_mountstats_nfs_transport_bind_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",protocol="tcp"} 0
node_mountstats_nfs_transport_bind_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",protocol="udp"} 0
# HELP node_mountstats_nfs_transport_connect_total Number of times the client has made a TCP connection to the NFS server.
# TYPE node_mountstats_nfs_transport_connect_total counter
node_mountstats_nfs_transport_connect_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",protocol="tcp"} 1
node_mountstats_nfs_transport_connect_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",protocol="udp"} 0
# HELP node_mountstats_nfs_transport_idle_time_seconds Duration since the NFS mount last saw any RPC traffic, in seconds.
# TYPE node_mountstats_nfs_transport_idle_time_seconds gauge
node_mountstats_nfs_transport_idle_time_seconds{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",protocol="tcp"} 11
node_mountstats_nfs_transport_idle_time_seconds{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",protocol="udp"} 0
# HELP node_mountstats_nfs_transport_maximum_rpc_slots Maximum number of simultaneously active RPC requests ever used.
# TYPE node_mountstats_nfs_transport_maximum_rpc_slots gauge
node_mountstats_nfs_transport_maximum_rpc_slots{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",protocol="tcp"} 24
node_mountstats_nfs_transport_maximum_rpc_slots{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",protocol="udp"} 24
# HELP node_mountstats_nfs_transport_pending_queue_total Total number of items added to the RPC transmission pending queue.
# TYPE node_mountstats_nfs_transport_pending_queue_total counter
node_mountstats_nfs_transport_pending_queue_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",protocol="tcp"} 5726
node_mountstats_nfs_transport_pending_queue_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",protocol="udp"} 5726
# HELP node_mountstats_nfs_transport_receives_total Number of RPC responses for this mount received from the NFS server.
# TYPE node_mountstats_nfs_transport_receives_total counter
node_mountstats_nfs_transport_receives_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",protocol="tcp"} 6428
node_mountstats_nfs_transport_receives_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",protocol="udp"} 6428
# HELP node_mountstats_nfs_transport_sending_queue_total Total number of items added to the RPC transmission sending queue.
# TYPE node_mountstats_nfs_transport_sending_queue_total counter
node_mountstats_nfs_transport_sending_queue_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",protocol="tcp"} 26
node_mountstats_nfs_transport_sending_queue_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",protocol="udp"} 26
# HELP node_mountstats_nfs_transport_sends_total Number of RPC requests for this mount sent to the NFS server.
# TYPE node_mountstats_nfs_transport_sends_total counter
node_mountstats_nfs_transport_sends_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",protocol="tcp"} 6428
node_mountstats_nfs_transport_sends_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",protocol="udp"} 6428
# HELP node_mountstats_nfs_write_bytes_total Number of bytes written using the write() syscall.
# TYPE node_mountstats_nfs_write_bytes_total counter
node_mountstats_nfs_write_bytes_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",protocol="tcp"} 0
node_mountstats_nfs_write_bytes_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",protocol="udp"} 0
# HELP node_mountstats_nfs_write_pages_total Number of pages written directly via mmap()'d files.
# TYPE node_mountstats_nfs_write_pages_total counter
node_mountstats_nfs_write_pages_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",protocol="tcp"} 0
node_mountstats_nfs_write_pages_total{export="192.168.1.1:/srv/test",mountaddr="192.168.1.1",protocol="udp"} 0
# HELP node_netstat_Icmp6_InErrors Statistic Icmp6InErrors.
# TYPE node_netstat_Icmp6_InErrors untyped
node_netstat_Icmp6_InErrors 0
# HELP node_netstat_Icmp6_InMsgs Statistic Icmp6InMsgs.
# TYPE node_netstat_Icmp6_InMsgs untyped
node_netstat_Icmp6_InMsgs 0
# HELP node_netstat_Icmp6_OutMsgs Statistic Icmp6OutMsgs.
# TYPE node_netstat_Icmp6_OutMsgs untyped
node_netstat_Icmp6_OutMsgs 8
# HELP node_netstat_Icmp_InErrors Statistic IcmpInErrors.
# TYPE node_netstat_Icmp_InErrors untyped
node_netstat_Icmp_InErrors 0
# HELP node_netstat_Icmp_InMsgs Statistic IcmpInMsgs.
# TYPE node_netstat_Icmp_InMsgs untyped
node_netstat_Icmp_InMsgs 104
# HELP node_netstat_Icmp_OutMsgs Statistic IcmpOutMsgs.
# TYPE node_netstat_Icmp_OutMsgs untyped
node_netstat_Icmp_OutMsgs 120
# HELP node_netstat_Ip6_InOctets Statistic Ip6InOctets.
# TYPE node_netstat_Ip6_InOctets untyped
node_netstat_Ip6_InOctets 460
# HELP node_netstat_Ip6_OutOctets Statistic Ip6OutOctets.
# TYPE node_netstat_Ip6_OutOctets untyped
node_netstat_Ip6_OutOctets 536
# HELP node_netstat_IpExt_InOctets Statistic IpExtInOctets.
# TYPE node_netstat_IpExt_InOctets untyped
node_netstat_IpExt_InOctets 6.28639697e+09
# HELP node_netstat_IpExt_OutOctets Statistic IpExtOutOctets.
# TYPE node_netstat_IpExt_OutOctets untyped
node_netstat_IpExt_OutOctets 2.786264347e+09
# HELP node_netstat_Ip_Forwarding Statistic IpForwarding.
# TYPE node_netstat_Ip_Forwarding untyped
node_netstat_Ip_Forwarding 1
# HELP node_netstat_TcpExt_ListenDrops Statistic TcpExtListenDrops.
# TYPE node_netstat_TcpExt_ListenDrops untyped
node_netstat_TcpExt_ListenDrops 0
# HELP node_netstat_TcpExt_ListenOverflows Statistic TcpExtListenOverflows.
# TYPE node_netstat_TcpExt_ListenOverflows untyped
node_netstat_TcpExt_ListenOverflows 0
# HELP node_netstat_TcpExt_SyncookiesFailed Statistic TcpExtSyncookiesFailed.
# TYPE node_netstat_TcpExt_SyncookiesFailed untyped
node_netstat_TcpExt_SyncookiesFailed 2
# HELP node_netstat_TcpExt_SyncookiesRecv Statistic TcpExtSyncookiesRecv.
# TYPE node_netstat_TcpExt_SyncookiesRecv untyped
node_netstat_TcpExt_SyncookiesRecv 0
# HELP node_netstat_TcpExt_SyncookiesSent Statistic TcpExtSyncookiesSent.
# TYPE node_netstat_TcpExt_SyncookiesSent untyped
node_netstat_TcpExt_SyncookiesSent 0
# HELP node_netstat_Tcp_ActiveOpens Statistic TcpActiveOpens.
# TYPE node_netstat_Tcp_ActiveOpens untyped
node_netstat_Tcp_ActiveOpens 3556
# HELP node_netstat_Tcp_CurrEstab Statistic TcpCurrEstab.
# TYPE node_netstat_Tcp_CurrEstab untyped
node_netstat_Tcp_CurrEstab 0
# HELP node_netstat_Tcp_InErrs Statistic TcpInErrs.
# TYPE node_netstat_Tcp_InErrs untyped
node_netstat_Tcp_InErrs 5
# HELP node_netstat_Tcp_InSegs Statistic TcpInSegs.
# TYPE node_netstat_Tcp_InSegs untyped
node_netstat_Tcp_InSegs 5.7252008e+07
# HELP node_netstat_Tcp_OutRsts Statistic TcpOutRsts.
# TYPE node_netstat_Tcp_OutRsts untyped
node_netstat_Tcp_OutRsts 1003
# HELP node_netstat_Tcp_OutSegs Statistic TcpOutSegs.
# TYPE node_netstat_Tcp_OutSegs untyped
node_netstat_Tcp_OutSegs 5.4915039e+07
# HELP node_netstat_Tcp_PassiveOpens Statistic TcpPassiveOpens.
# TYPE node_netstat_Tcp_PassiveOpens untyped
node_netstat_Tcp_PassiveOpens 230
# HELP node_netstat_Tcp_RetransSegs Statistic TcpRetransSegs.
# TYPE node_netstat_Tcp_RetransSegs untyped
node_netstat_Tcp_RetransSegs 227
# HELP node_netstat_Udp6_InDatagrams Statistic Udp6InDatagrams.
# TYPE node_netstat_Udp6_InDatagrams untyped
node_netstat_Udp6_InDatagrams 0
# HELP node_netstat_Udp6_InErrors Statistic Udp6InErrors.
# TYPE node_netstat_Udp6_InErrors untyped
node_netstat_Udp6_InErrors 0
# HELP node_netstat_Udp6_NoPorts Statistic Udp6NoPorts.
# TYPE node_netstat_Udp6_NoPorts untyped
node_netstat_Udp6_NoPorts 0
# HELP node_netstat_Udp6_OutDatagrams Statistic Udp6OutDatagrams.
# TYPE node_netstat_Udp6_OutDatagrams untyped
node_netstat_Udp6_OutDatagrams 0
# HELP node_netstat_Udp6_RcvbufErrors Statistic Udp6RcvbufErrors.
# TYPE node_netstat_Udp6_RcvbufErrors untyped
node_netstat_Udp6_RcvbufErrors 9
# HELP node_netstat_Udp6_SndbufErrors Statistic Udp6SndbufErrors.
# TYPE node_netstat_Udp6_SndbufErrors untyped
node_netstat_Udp6_SndbufErrors 8
# HELP node_netstat_UdpLite6_InErrors Statistic UdpLite6InErrors.
# TYPE node_netstat_UdpLite6_InErrors untyped
node_netstat_UdpLite6_InErrors 0
# HELP node_netstat_UdpLite_InErrors Statistic UdpLiteInErrors.
# TYPE node_netstat_UdpLite_InErrors untyped
node_netstat_UdpLite_InErrors 0
# HELP node_netstat_Udp_InDatagrams Statistic UdpInDatagrams.
# TYPE node_netstat_Udp_InDatagrams untyped
node_netstat_Udp_InDatagrams 88542
# HELP node_netstat_Udp_InErrors Statistic UdpInErrors.
# TYPE node_netstat_Udp_InErrors untyped
node_netstat_Udp_InErrors 0
# HELP node_netstat_Udp_NoPorts Statistic UdpNoPorts.
# TYPE node_netstat_Udp_NoPorts untyped
node_netstat_Udp_NoPorts 120
# HELP node_netstat_Udp_OutDatagrams Statistic UdpOutDatagrams.
# TYPE node_netstat_Udp_OutDatagrams untyped
node_netstat_Udp_OutDatagrams 53028
# HELP node_netstat_Udp_RcvbufErrors Statistic UdpRcvbufErrors.
# TYPE node_netstat_Udp_RcvbufErrors untyped
node_netstat_Udp_RcvbufErrors 9
# HELP node_netstat_Udp_SndbufErrors Statistic UdpSndbufErrors.
# TYPE node_netstat_Udp_SndbufErrors untyped
node_netstat_Udp_SndbufErrors 8
# HELP node_network_address_assign_type address_assign_type value of /sys/class/net/<iface>.
# TYPE node_network_address_assign_type gauge
node_network_address_assign_type{device="eth0"} 3
# HELP node_network_carrier carrier value of /sys/class/net/<iface>.
# TYPE node_network_carrier gauge
node_network_carrier{device="eth0"} 1
# HELP node_network_carrier_changes_total carrier_changes_total value of /sys/class/net/<iface>.
# TYPE node_network_carrier_changes_total counter
node_network_carrier_changes_total{device="eth0"} 2
# HELP node_network_carrier_down_changes_total carrier_down_changes_total value of /sys/class/net/<iface>.
# TYPE node_network_carrier_down_changes_total counter
node_network_carrier_down_changes_total{device="eth0"} 1
# HELP node_network_carrier_up_changes_total carrier_up_changes_total value of /sys/class/net/<iface>.
# TYPE node_network_carrier_up_changes_total counter
node_network_carrier_up_changes_total{device="eth0"} 1
# HELP node_network_device_id device_id value of /sys/class/net/<iface>.
# TYPE node_network_device_id gauge
node_network_device_id{device="eth0"} 32
# HELP node_network_dormant dormant value of /sys/class/net/<iface>.
# TYPE node_network_dormant gauge
node_network_dormant{device="eth0"} 1
# HELP node_network_flags flags value of /sys/class/net/<iface>.
# TYPE node_network_flags gauge
node_network_flags{device="eth0"} 4867
# HELP node_network_iface_id iface_id value of /sys/class/net/<iface>.
# TYPE node_network_iface_id gauge
node_network_iface_id{device="eth0"} 2
# HELP node_network_iface_link iface_link value of /sys/class/net/<iface>.
# TYPE node_network_iface_link gauge
node_network_iface_link{device="eth0"} 2
# HELP node_network_iface_link_mode iface_link_mode value of /sys/class/net/<iface>.
# TYPE node_network_iface_link_mode gauge
node_network_iface_link_mode{device="eth0"} 1
# HELP node_network_info Non-numeric data from /sys/class/net/<iface>, value is always 1.
# TYPE node_network_info gauge
node_network_info{address="01:01:01:01:01:01",broadcast="ff:ff:ff:ff:ff:ff",device="eth0",duplex="full",ifalias="",operstate="up"} 1
# HELP node_network_mtu_bytes mtu_bytes value of /sys/class/net/<iface>.
# TYPE node_network_mtu_bytes gauge
node_network_mtu_bytes{device="eth0"} 1500
# HELP node_network_name_assign_type name_assign_type value of /sys/class/net/<iface>.
# TYPE node_network_name_assign_type gauge
node_network_name_assign_type{device="eth0"} 2
# HELP node_network_net_dev_group net_dev_group value of /sys/class/net/<iface>.
# TYPE node_network_net_dev_group gauge
node_network_net_dev_group{device="eth0"} 0
# HELP node_network_protocol_type protocol_type value of /sys/class/net/<iface>.
# TYPE node_network_protocol_type gauge
node_network_protocol_type{device="eth0"} 1
# HELP node_network_receive_bytes_total Network device statistic receive_bytes.
# TYPE node_network_receive_bytes_total counter
node_network_receive_bytes_total{device="docker0"} 6.4910168e+07
node_network_receive_bytes_total{device="eth0"} 6.8210035552e+10
node_network_receive_bytes_total{device="flannel.1"} 1.8144009813e+10
node_network_receive_bytes_total{device="ibr10:30"} 0
node_network_receive_bytes_total{device="lo"} 4.35303245e+08
node_network_receive_bytes_total{device="lxcbr0"} 0
node_network_receive_bytes_total{device="tun0"} 1888
node_network_receive_bytes_total{device="veth4B09XN"} 648
node_network_receive_bytes_total{device="wlan0"} 1.0437182923e+10
node_network_receive_bytes_total{device="💩0"} 5.7750104e+07
# HELP node_network_receive_compressed_total Network device statistic receive_compressed.
# TYPE node_network_receive_compressed_total counter
node_network_receive_compressed_total{device="docker0"} 0
node_network_receive_compressed_total{device="eth0"} 0
node_network_receive_compressed_total{device="flannel.1"} 0
node_network_receive_compressed_total{device="ibr10:30"} 0
node_network_receive_compressed_total{device="lo"} 0
node_network_receive_compressed_total{device="lxcbr0"} 0
node_network_receive_compressed_total{device="tun0"} 0
node_network_receive_compressed_total{device="veth4B09XN"} 0
node_network_receive_compressed_total{device="wlan0"} 0
node_network_receive_compressed_total{device="💩0"} 0
# HELP node_network_receive_drop_total Network device statistic receive_drop.
# TYPE node_network_receive_drop_total counter
node_network_receive_drop_total{device="docker0"} 0
node_network_receive_drop_total{device="eth0"} 0
node_network_receive_drop_total{device="flannel.1"} 0
node_network_receive_drop_total{device="ibr10:30"} 0
node_network_receive_drop_total{device="lo"} 0
node_network_receive_drop_total{device="lxcbr0"} 0
node_network_receive_drop_total{device="tun0"} 0
node_network_receive_drop_total{device="veth4B09XN"} 0
node_network_receive_drop_total{device="wlan0"} 0
node_network_receive_drop_total{device="💩0"} 0
# HELP node_network_receive_errs_total Network device statistic receive_errs.
# TYPE node_network_receive_errs_total counter
node_network_receive_errs_total{device="docker0"} 0
node_network_receive_errs_total{device="eth0"} 0
node_network_receive_errs_total{device="flannel.1"} 0
node_network_receive_errs_total{device="ibr10:30"} 0
node_network_receive_errs_total{device="lo"} 0
node_network_receive_errs_total{device="lxcbr0"} 0
node_network_receive_errs_total{device="tun0"} 0
node_network_receive_errs_total{device="veth4B09XN"} 0
node_network_receive_errs_total{device="wlan0"} 0
node_network_receive_errs_total{device="💩0"} 0
# HELP node_network_receive_fifo_total Network device statistic receive_fifo.
# TYPE node_network_receive_fifo_total counter
node_network_receive_fifo_total{device="docker0"} 0
node_network_receive_fifo_total{device="eth0"} 0
node_network_receive_fifo_total{device="flannel.1"} 0
node_network_receive_fifo_total{device="ibr10:30"} 0
node_network_receive_fifo_total{device="lo"} 0
node_network_receive_fifo_total{device="lxcbr0"} 0
node_network_receive_fifo_total{device="tun0"} 0
node_network_receive_fifo_total{device="veth4B09XN"} 0
node_network_receive_fifo_total{device="wlan0"} 0
node_network_receive_fifo_total{device="💩0"} 0
# HELP node_network_receive_frame_total Network device statistic receive_frame.
# TYPE node_network_receive_frame_total counter
node_network_receive_frame_total{device="docker0"} 0
node_network_receive_frame_total{device="eth0"} 0
node_network_receive_frame_total{device="flannel.1"} 0
node_network_receive_frame_total{device="ibr10:30"} 0
node_network_receive_frame_total{device="lo"} 0
node_network_receive_frame_total{device="lxcbr0"} 0
node_network_receive_frame_total{device="tun0"} 0
node_network_receive_frame_total{device="veth4B09XN"} 0
node_network_receive_frame_total{device="wlan0"} 0
node_network_receive_frame_total{device="💩0"} 0
# HELP node_network_receive_multicast_total Network device statistic receive_multicast.
# TYPE node_network_receive_multicast_total counter
node_network_receive_multicast_total{device="docker0"} 0
node_network_receive_multicast_total{device="eth0"} 0
node_network_receive_multicast_total{device="flannel.1"} 0
node_network_receive_multicast_total{device="ibr10:30"} 0
node_network_receive_multicast_total{device="lo"} 0
node_network_receive_multicast_total{device="lxcbr0"} 0
node_network_receive_multicast_total{device="tun0"} 0
node_network_receive_multicast_total{device="veth4B09XN"} 0
node_network_receive_multicast_total{device="wlan0"} 0
node_network_receive_multicast_total{device="💩0"} 72
# HELP node_network_receive_packets_total Network device statistic receive_packets.
# TYPE node_network_receive_packets_total counter
node_network_receive_packets_total{device="docker0"} 1.065585e+06
node_network_receive_packets_total{device="eth0"} 5.20993275e+08
node_network_receive_packets_total{device="flannel.1"} 2.28499337e+08
node_network_receive_packets_total{device="ibr10:30"} 0
node_network_receive_packets_total{device="lo"} 1.832522e+06
node_network_receive_packets_total{device="lxcbr0"} 0
node_network_receive_packets_total{device="tun0"} 24
node_network_receive_packets_total{device="veth4B09XN"} 8
node_network_receive_packets_total{device="wlan0"} 1.3899359e+07
node_network_receive_packets_total{device="💩0"} 105557
# HELP node_network_speed_bytes speed_bytes value of /sys/class/net/<iface>.
# TYPE node_network_speed_bytes gauge
node_network_speed_bytes{device="eth0"} 1.25e+08
# HELP node_network_transmit_bytes_total Network device statistic transmit_bytes.
# TYPE node_network_transmit_bytes_total counter
node_network_transmit_bytes_total{device="docker0"} 2.681662018e+09
node_network_transmit_bytes_total{device="eth0"} 9.315587528e+09
node_network_transmit_bytes_total{device="flannel.1"} 2.0758990068e+10
node_network_transmit_bytes_total{device="ibr10:30"} 0
node_network_transmit_bytes_total{device="lo"} 4.35303245e+08
node_network_transmit_bytes_total{device="lxcbr0"} 2.630299e+06
node_network_transmit_bytes_total{device="tun0"} 67120
node_network_transmit_bytes_total{device="veth4B09XN"} 1.943284e+06
node_network_transmit_bytes_total{device="wlan0"} 2.85164936e+09
node_network_transmit_bytes_total{device="💩0"} 4.04570255e+08
# HELP node_network_transmit_carrier_total Network device statistic transmit_carrier.
# TYPE node_network_transmit_carrier_total counter
node_network_transmit_carrier_total{device="docker0"} 0
node_network_transmit_carrier_total{device="eth0"} 0
node_network_transmit_carrier_total{device="flannel.1"} 0
node_network_transmit_carrier_total{device="ibr10:30"} 0
node_network_transmit_carrier_total{device="lo"} 0
node_network_transmit_carrier_total{device="lxcbr0"} 0
node_network_transmit_carrier_total{device="tun0"} 0
node_network_transmit_carrier_total{device="veth4B09XN"} 0
node_network_transmit_carrier_total{device="wlan0"} 0
node_network_transmit_carrier_total{device="💩0"} 0
# HELP node_network_transmit_colls_total Network device statistic transmit_colls.
# TYPE node_network_transmit_colls_total counter
node_network_transmit_colls_total{device="docker0"} 0
node_network_transmit_colls_total{device="eth0"} 0
node_network_transmit_colls_total{device="flannel.1"} 0
node_network_transmit_colls_total{device="ibr10:30"} 0
node_network_transmit_colls_total{device="lo"} 0
node_network_transmit_colls_total{device="lxcbr0"} 0
node_network_transmit_colls_total{device="tun0"} 0
node_network_transmit_colls_total{device="veth4B09XN"} 0
node_network_transmit_colls_total{device="wlan0"} 0
node_network_transmit_colls_total{device="💩0"} 0
# HELP node_network_transmit_compressed_total Network device statistic transmit_compressed.
# TYPE node_network_transmit_compressed_total counter
node_network_transmit_compressed_total{device="docker0"} 0
node_network_transmit_compressed_total{device="eth0"} 0
node_network_transmit_compressed_total{device="flannel.1"} 0
node_network_transmit_compressed_total{device="ibr10:30"} 0
node_network_transmit_compressed_total{device="lo"} 0
node_network_transmit_compressed_total{device="lxcbr0"} 0
node_network_transmit_compressed_total{device="tun0"} 0
node_network_transmit_compressed_total{device="veth4B09XN"} 0
node_network_transmit_compressed_total{device="wlan0"} 0
2018-04-18 12:48:27 +02:00
node_network_transmit_compressed_total{device="💩0"} 0
2017-12-21 16:24:23 +01:00
# HELP node_network_transmit_drop_total Network device statistic transmit_drop.
# TYPE node_network_transmit_drop_total counter
node_network_transmit_drop_total{device="docker0"} 0
node_network_transmit_drop_total{device="eth0"} 0
2018-04-18 12:48:27 +02:00
node_network_transmit_drop_total{device="flannel.1"} 64
2018-04-16 14:34:39 +02:00
node_network_transmit_drop_total{device="ibr10:30"} 0
2017-12-21 16:24:23 +01:00
node_network_transmit_drop_total{device="lo"} 0
node_network_transmit_drop_total{device="lxcbr0"} 0
node_network_transmit_drop_total{device="tun0"} 0
node_network_transmit_drop_total{device="veth4B09XN"} 0
node_network_transmit_drop_total{device="wlan0"} 0
2018-04-18 12:48:27 +02:00
node_network_transmit_drop_total{device="💩0"} 0
2017-12-21 16:24:23 +01:00
# HELP node_network_transmit_errs_total Network device statistic transmit_errs.
# TYPE node_network_transmit_errs_total counter
node_network_transmit_errs_total{device="docker0"} 0
node_network_transmit_errs_total{device="eth0"} 0
2018-04-18 12:48:27 +02:00
node_network_transmit_errs_total{device="flannel.1"} 0
2018-04-16 14:34:39 +02:00
node_network_transmit_errs_total{device="ibr10:30"} 0
2017-12-21 16:24:23 +01:00
node_network_transmit_errs_total{device="lo"} 0
node_network_transmit_errs_total{device="lxcbr0"} 0
node_network_transmit_errs_total{device="tun0"} 0
node_network_transmit_errs_total{device="veth4B09XN"} 0
node_network_transmit_errs_total{device="wlan0"} 0
2018-04-18 12:48:27 +02:00
node_network_transmit_errs_total{device="💩0"} 0
2017-12-21 16:24:23 +01:00
# HELP node_network_transmit_fifo_total Network device statistic transmit_fifo.
# TYPE node_network_transmit_fifo_total counter
node_network_transmit_fifo_total{device="docker0"} 0
node_network_transmit_fifo_total{device="eth0"} 0
2018-04-18 12:48:27 +02:00
node_network_transmit_fifo_total{device="flannel.1"} 0
2018-04-16 14:34:39 +02:00
node_network_transmit_fifo_total{device="ibr10:30"} 0
2017-12-21 16:24:23 +01:00
node_network_transmit_fifo_total{device="lo"} 0
node_network_transmit_fifo_total{device="lxcbr0"} 0
node_network_transmit_fifo_total{device="tun0"} 0
node_network_transmit_fifo_total{device="veth4B09XN"} 0
node_network_transmit_fifo_total{device="wlan0"} 0
2018-04-18 12:48:27 +02:00
node_network_transmit_fifo_total{device="💩0"} 0
2017-12-21 16:24:23 +01:00
# HELP node_network_transmit_packets_total Network device statistic transmit_packets.
# TYPE node_network_transmit_packets_total counter
node_network_transmit_packets_total{device="docker0"} 1.929779e+06
node_network_transmit_packets_total{device="eth0"} 4.3451486e+07
2018-04-18 12:48:27 +02:00
node_network_transmit_packets_total{device="flannel.1"} 2.58369223e+08
2018-04-16 14:34:39 +02:00
node_network_transmit_packets_total{device="ibr10:30"} 0
2017-12-21 16:24:23 +01:00
node_network_transmit_packets_total{device="lo"} 1.832522e+06
node_network_transmit_packets_total{device="lxcbr0"} 28339
node_network_transmit_packets_total{device="tun0"} 934
node_network_transmit_packets_total{device="veth4B09XN"} 10640
node_network_transmit_packets_total{device="wlan0"} 1.17262e+07
2018-04-18 12:48:27 +02:00
node_network_transmit_packets_total{device="💩0"} 304261
2018-07-16 15:08:18 +02:00
# HELP node_network_transmit_queue_length transmit_queue_length value of /sys/class/net/<iface>.
# TYPE node_network_transmit_queue_length gauge
2019-02-06 20:02:48 +01:00
node_network_transmit_queue_length{device="eth0"} 1000
2019-02-07 15:59:32 +01:00
# HELP node_network_up Value is 1 if operstate is 'up', 0 otherwise.
2018-07-16 15:08:18 +02:00
# TYPE node_network_up gauge
2019-02-07 15:59:32 +01:00
node_network_up{device="eth0"} 1
2015-12-20 01:57:52 +01:00
# HELP node_nf_conntrack_entries Number of currently allocated flow entries for connection tracking.
# TYPE node_nf_conntrack_entries gauge
node_nf_conntrack_entries 123
# HELP node_nf_conntrack_entries_limit Maximum size of connection tracking table.
# TYPE node_nf_conntrack_entries_limit gauge
node_nf_conntrack_entries_limit 65536
2018-02-21 07:25:41 +01:00
# HELP node_nfs_connections_total Total number of NFSd TCP connections.
# TYPE node_nfs_connections_total counter
node_nfs_connections_total 45
# HELP node_nfs_packets_total Total NFSd network packets (sent+received) by protocol type.
# TYPE node_nfs_packets_total counter
node_nfs_packets_total{protocol="tcp"} 69
node_nfs_packets_total{protocol="udp"} 70
# HELP node_nfs_requests_total Number of NFS procedures invoked.
# TYPE node_nfs_requests_total counter
node_nfs_requests_total{method="Access",proto="3"} 1.17661341e+08
node_nfs_requests_total{method="Access",proto="4"} 58
node_nfs_requests_total{method="Allocate",proto="4"} 0
node_nfs_requests_total{method="BindConnToSession",proto="4"} 0
node_nfs_requests_total{method="Clone",proto="4"} 0
node_nfs_requests_total{method="Close",proto="4"} 28
node_nfs_requests_total{method="Commit",proto="3"} 23729
node_nfs_requests_total{method="Commit",proto="4"} 83
node_nfs_requests_total{method="Create",proto="2"} 52
node_nfs_requests_total{method="Create",proto="3"} 2.993289e+06
node_nfs_requests_total{method="Create",proto="4"} 15
node_nfs_requests_total{method="CreateSession",proto="4"} 32
node_nfs_requests_total{method="DeAllocate",proto="4"} 0
node_nfs_requests_total{method="DelegReturn",proto="4"} 97
2018-03-22 22:25:37 +01:00
node_nfs_requests_total{method="DestroyClientID",proto="4"} 0
2018-02-21 07:25:41 +01:00
node_nfs_requests_total{method="DestroySession",proto="4"} 67
2018-03-22 22:25:37 +01:00
node_nfs_requests_total{method="ExchangeID",proto="4"} 58
node_nfs_requests_total{method="FreeStateID",proto="4"} 0
2018-02-21 07:25:41 +01:00
node_nfs_requests_total{method="FsInfo",proto="3"} 2
node_nfs_requests_total{method="FsInfo",proto="4"} 68
node_nfs_requests_total{method="FsLocations",proto="4"} 32
node_nfs_requests_total{method="FsStat",proto="2"} 82
node_nfs_requests_total{method="FsStat",proto="3"} 13332
node_nfs_requests_total{method="FsidPresent",proto="4"} 11
2018-03-22 22:25:37 +01:00
node_nfs_requests_total{method="GetACL",proto="4"} 36
2018-02-21 07:25:41 +01:00
node_nfs_requests_total{method="GetAttr",proto="2"} 57
node_nfs_requests_total{method="GetAttr",proto="3"} 1.061909262e+09
node_nfs_requests_total{method="GetDeviceInfo",proto="4"} 1
node_nfs_requests_total{method="GetDeviceList",proto="4"} 0
node_nfs_requests_total{method="GetLeaseTime",proto="4"} 28
node_nfs_requests_total{method="Getattr",proto="4"} 88
node_nfs_requests_total{method="LayoutCommit",proto="4"} 26
node_nfs_requests_total{method="LayoutGet",proto="4"} 90
node_nfs_requests_total{method="LayoutReturn",proto="4"} 0
node_nfs_requests_total{method="LayoutStats",proto="4"} 0
node_nfs_requests_total{method="Link",proto="2"} 17
node_nfs_requests_total{method="Link",proto="3"} 0
node_nfs_requests_total{method="Link",proto="4"} 21
node_nfs_requests_total{method="Lock",proto="4"} 39
node_nfs_requests_total{method="Lockt",proto="4"} 68
node_nfs_requests_total{method="Locku",proto="4"} 59
node_nfs_requests_total{method="Lookup",proto="2"} 71
node_nfs_requests_total{method="Lookup",proto="3"} 4.077635e+06
node_nfs_requests_total{method="Lookup",proto="4"} 29
node_nfs_requests_total{method="LookupRoot",proto="4"} 74
node_nfs_requests_total{method="MkDir",proto="2"} 50
node_nfs_requests_total{method="MkDir",proto="3"} 590
node_nfs_requests_total{method="MkNod",proto="3"} 0
node_nfs_requests_total{method="Null",proto="2"} 16
node_nfs_requests_total{method="Null",proto="3"} 0
node_nfs_requests_total{method="Null",proto="4"} 98
node_nfs_requests_total{method="Open",proto="4"} 85
node_nfs_requests_total{method="OpenConfirm",proto="4"} 23
node_nfs_requests_total{method="OpenDowngrade",proto="4"} 1
node_nfs_requests_total{method="OpenNoattr",proto="4"} 24
node_nfs_requests_total{method="PathConf",proto="3"} 1
node_nfs_requests_total{method="Pathconf",proto="4"} 53
node_nfs_requests_total{method="Read",proto="2"} 45
node_nfs_requests_total{method="Read",proto="3"} 2.9391916e+07
node_nfs_requests_total{method="Read",proto="4"} 51
node_nfs_requests_total{method="ReadDir",proto="2"} 70
node_nfs_requests_total{method="ReadDir",proto="3"} 3983
node_nfs_requests_total{method="ReadDir",proto="4"} 66
node_nfs_requests_total{method="ReadDirPlus",proto="3"} 92385
node_nfs_requests_total{method="ReadLink",proto="2"} 73
node_nfs_requests_total{method="ReadLink",proto="3"} 5
node_nfs_requests_total{method="ReadLink",proto="4"} 54
node_nfs_requests_total{method="ReclaimComplete",proto="4"} 35
node_nfs_requests_total{method="ReleaseLockowner",proto="4"} 85
node_nfs_requests_total{method="Remove",proto="2"} 83
node_nfs_requests_total{method="Remove",proto="3"} 7815
node_nfs_requests_total{method="Remove",proto="4"} 69
node_nfs_requests_total{method="Rename",proto="2"} 61
node_nfs_requests_total{method="Rename",proto="3"} 1130
node_nfs_requests_total{method="Rename",proto="4"} 96
node_nfs_requests_total{method="Renew",proto="4"} 83
node_nfs_requests_total{method="RmDir",proto="2"} 23
node_nfs_requests_total{method="RmDir",proto="3"} 15
node_nfs_requests_total{method="Root",proto="2"} 52
node_nfs_requests_total{method="Secinfo",proto="4"} 81
node_nfs_requests_total{method="SecinfoNoName",proto="4"} 0
node_nfs_requests_total{method="Seek",proto="4"} 0
node_nfs_requests_total{method="Sequence",proto="4"} 13
node_nfs_requests_total{method="ServerCaps",proto="4"} 56
2018-03-22 22:25:37 +01:00
node_nfs_requests_total{method="SetACL",proto="4"} 49
2018-02-21 07:25:41 +01:00
node_nfs_requests_total{method="SetAttr",proto="2"} 74
node_nfs_requests_total{method="SetAttr",proto="3"} 48906
2018-03-22 22:25:37 +01:00
node_nfs_requests_total{method="SetClientID",proto="4"} 12
node_nfs_requests_total{method="SetClientIDConfirm",proto="4"} 84
2018-02-21 07:25:41 +01:00
node_nfs_requests_total{method="Setattr",proto="4"} 73
node_nfs_requests_total{method="StatFs",proto="4"} 86
node_nfs_requests_total{method="SymLink",proto="2"} 53
node_nfs_requests_total{method="SymLink",proto="3"} 0
node_nfs_requests_total{method="Symlink",proto="4"} 84
2018-03-22 22:25:37 +01:00
node_nfs_requests_total{method="TestStateID",proto="4"} 0
2018-02-21 07:25:41 +01:00
node_nfs_requests_total{method="WrCache",proto="2"} 86
node_nfs_requests_total{method="Write",proto="2"} 0
node_nfs_requests_total{method="Write",proto="3"} 2.570425e+06
node_nfs_requests_total{method="Write",proto="4"} 54
2018-02-12 18:53:31 +01:00
# HELP node_nfs_rpc_authentication_refreshes_total Number of RPC authentication refreshes performed.
# TYPE node_nfs_rpc_authentication_refreshes_total counter
node_nfs_rpc_authentication_refreshes_total 1.218815394e+09
# HELP node_nfs_rpc_retransmissions_total Number of RPC retransmissions performed.
# TYPE node_nfs_rpc_retransmissions_total counter
node_nfs_rpc_retransmissions_total 374636
2018-02-21 07:25:41 +01:00
# HELP node_nfs_rpcs_total Total number of RPCs performed.
# TYPE node_nfs_rpcs_total counter
node_nfs_rpcs_total 1.218785755e+09
2018-02-12 17:56:05 +01:00
# HELP node_nfsd_connections_total Total number of NFSd TCP connections.
# TYPE node_nfsd_connections_total counter
node_nfsd_connections_total 1
# HELP node_nfsd_disk_bytes_read_total Total NFSd bytes read.
# TYPE node_nfsd_disk_bytes_read_total counter
node_nfsd_disk_bytes_read_total 1.572864e+08
# HELP node_nfsd_disk_bytes_written_total Total NFSd bytes written.
# TYPE node_nfsd_disk_bytes_written_total counter
node_nfsd_disk_bytes_written_total 72864
# HELP node_nfsd_file_handles_stale_total Total number of NFSd stale file handles
# TYPE node_nfsd_file_handles_stale_total counter
node_nfsd_file_handles_stale_total 0
2018-02-21 07:25:41 +01:00
# HELP node_nfsd_packets_total Total NFSd network packets (sent+received) by protocol type.
2018-02-12 17:56:05 +01:00
# TYPE node_nfsd_packets_total counter
node_nfsd_packets_total{proto="tcp"} 917
node_nfsd_packets_total{proto="udp"} 55
# HELP node_nfsd_read_ahead_cache_not_found_total Total number of NFSd read ahead cache not found.
# TYPE node_nfsd_read_ahead_cache_not_found_total counter
node_nfsd_read_ahead_cache_not_found_total 0
# HELP node_nfsd_read_ahead_cache_size_blocks How large the read ahead cache is in blocks.
# TYPE node_nfsd_read_ahead_cache_size_blocks gauge
node_nfsd_read_ahead_cache_size_blocks 32
# HELP node_nfsd_reply_cache_hits_total Total number of NFSd Reply Cache hits (client lost server response).
# TYPE node_nfsd_reply_cache_hits_total counter
node_nfsd_reply_cache_hits_total 0
# HELP node_nfsd_reply_cache_misses_total Total number of NFSd Reply Cache misses (idempotent operations that require caching).
# TYPE node_nfsd_reply_cache_misses_total counter
node_nfsd_reply_cache_misses_total 6
# HELP node_nfsd_reply_cache_nocache_total Total number of NFSd Reply Cache non-idempotent operations (rename/delete/…).
# TYPE node_nfsd_reply_cache_nocache_total counter
node_nfsd_reply_cache_nocache_total 18622
# HELP node_nfsd_requests_total Total number of NFSd Requests by method and protocol.
# TYPE node_nfsd_requests_total counter
2018-02-21 07:25:41 +01:00
node_nfsd_requests_total{method="Access",proto="3"} 111
node_nfsd_requests_total{method="Access",proto="4"} 1098
node_nfsd_requests_total{method="Close",proto="4"} 2
node_nfsd_requests_total{method="Commit",proto="3"} 0
node_nfsd_requests_total{method="Commit",proto="4"} 0
node_nfsd_requests_total{method="Create",proto="2"} 0
node_nfsd_requests_total{method="Create",proto="3"} 0
node_nfsd_requests_total{method="Create",proto="4"} 0
node_nfsd_requests_total{method="DelegPurge",proto="4"} 0
node_nfsd_requests_total{method="DelegReturn",proto="4"} 0
node_nfsd_requests_total{method="FsInfo",proto="3"} 2
node_nfsd_requests_total{method="FsStat",proto="2"} 2
node_nfsd_requests_total{method="FsStat",proto="3"} 0
node_nfsd_requests_total{method="GetAttr",proto="2"} 69
node_nfsd_requests_total{method="GetAttr",proto="3"} 112
node_nfsd_requests_total{method="GetAttr",proto="4"} 8179
node_nfsd_requests_total{method="GetFH",proto="4"} 5896
node_nfsd_requests_total{method="Link",proto="2"} 0
node_nfsd_requests_total{method="Link",proto="3"} 0
node_nfsd_requests_total{method="Link",proto="4"} 0
node_nfsd_requests_total{method="Lock",proto="4"} 0
node_nfsd_requests_total{method="Lockt",proto="4"} 0
node_nfsd_requests_total{method="Locku",proto="4"} 0
node_nfsd_requests_total{method="Lookup",proto="2"} 4410
node_nfsd_requests_total{method="Lookup",proto="3"} 2719
node_nfsd_requests_total{method="Lookup",proto="4"} 5900
node_nfsd_requests_total{method="LookupRoot",proto="4"} 0
node_nfsd_requests_total{method="MkDir",proto="2"} 0
node_nfsd_requests_total{method="MkDir",proto="3"} 0
node_nfsd_requests_total{method="MkNod",proto="3"} 0
node_nfsd_requests_total{method="Nverify",proto="4"} 0
node_nfsd_requests_total{method="Open",proto="4"} 2
node_nfsd_requests_total{method="OpenAttr",proto="4"} 0
node_nfsd_requests_total{method="OpenConfirm",proto="4"} 2
node_nfsd_requests_total{method="OpenDgrd",proto="4"} 0
node_nfsd_requests_total{method="PathConf",proto="3"} 1
node_nfsd_requests_total{method="PutFH",proto="4"} 9609
node_nfsd_requests_total{method="Read",proto="2"} 0
node_nfsd_requests_total{method="Read",proto="3"} 0
node_nfsd_requests_total{method="Read",proto="4"} 150
node_nfsd_requests_total{method="ReadDir",proto="2"} 99
node_nfsd_requests_total{method="ReadDir",proto="3"} 27
node_nfsd_requests_total{method="ReadDir",proto="4"} 1272
node_nfsd_requests_total{method="ReadDirPlus",proto="3"} 216
node_nfsd_requests_total{method="ReadLink",proto="2"} 0
node_nfsd_requests_total{method="ReadLink",proto="3"} 0
node_nfsd_requests_total{method="ReadLink",proto="4"} 0
node_nfsd_requests_total{method="RelLockOwner",proto="4"} 0
node_nfsd_requests_total{method="Remove",proto="2"} 0
node_nfsd_requests_total{method="Remove",proto="3"} 0
node_nfsd_requests_total{method="Remove",proto="4"} 0
node_nfsd_requests_total{method="Rename",proto="2"} 0
node_nfsd_requests_total{method="Rename",proto="3"} 0
node_nfsd_requests_total{method="Rename",proto="4"} 0
node_nfsd_requests_total{method="Renew",proto="4"} 1236
node_nfsd_requests_total{method="RestoreFH",proto="4"} 0
node_nfsd_requests_total{method="RmDir",proto="2"} 0
node_nfsd_requests_total{method="RmDir",proto="3"} 0
node_nfsd_requests_total{method="Root",proto="2"} 0
node_nfsd_requests_total{method="SaveFH",proto="4"} 0
node_nfsd_requests_total{method="SecInfo",proto="4"} 0
node_nfsd_requests_total{method="SetAttr",proto="2"} 0
node_nfsd_requests_total{method="SetAttr",proto="3"} 0
node_nfsd_requests_total{method="SetAttr",proto="4"} 0
node_nfsd_requests_total{method="SymLink",proto="2"} 0
node_nfsd_requests_total{method="SymLink",proto="3"} 0
node_nfsd_requests_total{method="Verify",proto="4"} 3
node_nfsd_requests_total{method="WrCache",proto="2"} 0
node_nfsd_requests_total{method="Write",proto="2"} 0
node_nfsd_requests_total{method="Write",proto="3"} 0
node_nfsd_requests_total{method="Write",proto="4"} 3
2018-02-12 17:56:05 +01:00
# HELP node_nfsd_rpc_errors_total Total number of NFSd RPC errors by error type.
# TYPE node_nfsd_rpc_errors_total counter
node_nfsd_rpc_errors_total{error="auth"} 2
node_nfsd_rpc_errors_total{error="cInt"} 0
node_nfsd_rpc_errors_total{error="fmt"} 1
# HELP node_nfsd_server_rpcs_total Total number of NFSd RPCs.
2018-02-13 17:03:22 +01:00
# TYPE node_nfsd_server_rpcs_total counter
2018-02-12 17:56:05 +01:00
node_nfsd_server_rpcs_total 18628
# HELP node_nfsd_server_threads Total number of NFSd kernel threads that are running.
# TYPE node_nfsd_server_threads gauge
node_nfsd_server_threads 8
2019-09-18 21:31:15 +02:00
# HELP node_power_supply_capacity capacity value of /sys/class/power_supply/<power_supply>.
# TYPE node_power_supply_capacity gauge
node_power_supply_capacity{power_supply="BAT0"} 81
# HELP node_power_supply_cyclecount cyclecount value of /sys/class/power_supply/<power_supply>.
# TYPE node_power_supply_cyclecount gauge
node_power_supply_cyclecount{power_supply="BAT0"} 0
# HELP node_power_supply_energy_full energy_full value of /sys/class/power_supply/<power_supply>.
# TYPE node_power_supply_energy_full gauge
node_power_supply_energy_full{power_supply="BAT0"} 45.07
# HELP node_power_supply_energy_full_design energy_full_design value of /sys/class/power_supply/<power_supply>.
# TYPE node_power_supply_energy_full_design gauge
node_power_supply_energy_full_design{power_supply="BAT0"} 47.52
# HELP node_power_supply_energy_watthour energy_watthour value of /sys/class/power_supply/<power_supply>.
# TYPE node_power_supply_energy_watthour gauge
node_power_supply_energy_watthour{power_supply="BAT0"} 36.58
# HELP node_power_supply_info info of /sys/class/power_supply/<power_supply>.
# TYPE node_power_supply_info gauge
node_power_supply_info{power_supply="AC",type="Mains"} 1
node_power_supply_info{capacity_level="Normal",manufacturer="LGC",model_name="LNV-45N1",power_supply="BAT0",serial_number="38109",status="Discharging",technology="Li-ion",type="Battery"} 1
# HELP node_power_supply_online online value of /sys/class/power_supply/<power_supply>.
# TYPE node_power_supply_online gauge
node_power_supply_online{power_supply="AC"} 0
# HELP node_power_supply_power_watt power_watt value of /sys/class/power_supply/<power_supply>.
# TYPE node_power_supply_power_watt gauge
node_power_supply_power_watt{power_supply="BAT0"} 5.002
# HELP node_power_supply_present present value of /sys/class/power_supply/<power_supply>.
# TYPE node_power_supply_present gauge
node_power_supply_present{power_supply="BAT0"} 1
# HELP node_power_supply_voltage_min_design voltage_min_design value of /sys/class/power_supply/<power_supply>.
# TYPE node_power_supply_voltage_min_design gauge
node_power_supply_voltage_min_design{power_supply="BAT0"} 10.8
# HELP node_power_supply_voltage_volt voltage_volt value of /sys/class/power_supply/<power_supply>.
# TYPE node_power_supply_voltage_volt gauge
node_power_supply_voltage_volt{power_supply="BAT0"} 11.66
2019-04-18 12:19:20 +02:00
# HELP node_pressure_cpu_waiting_seconds_total Total time in seconds that processes have waited for CPU time
# TYPE node_pressure_cpu_waiting_seconds_total counter
node_pressure_cpu_waiting_seconds_total 14.036781000000001
# HELP node_pressure_io_stalled_seconds_total Total time in seconds no process could make progress due to IO congestion
# TYPE node_pressure_io_stalled_seconds_total counter
node_pressure_io_stalled_seconds_total 159.229614
# HELP node_pressure_io_waiting_seconds_total Total time in seconds that processes have waited due to IO congestion
# TYPE node_pressure_io_waiting_seconds_total counter
node_pressure_io_waiting_seconds_total 159.886802
# HELP node_pressure_memory_stalled_seconds_total Total time in seconds no process could make progress due to memory congestion
# TYPE node_pressure_memory_stalled_seconds_total counter
node_pressure_memory_stalled_seconds_total 0
# HELP node_pressure_memory_waiting_seconds_total Total time in seconds that processes have waited for memory
# TYPE node_pressure_memory_waiting_seconds_total counter
node_pressure_memory_waiting_seconds_total 0
2018-06-05 19:38:32 +02:00
# HELP node_processes_max_processes Maximum number of PIDs (PID limit)
# TYPE node_processes_max_processes gauge
node_processes_max_processes 123
# HELP node_processes_max_threads Limit of threads in the system
# TYPE node_processes_max_threads gauge
node_processes_max_threads 7801
# HELP node_processes_pids Number of PIDs
# TYPE node_processes_pids gauge
node_processes_pids 1
# HELP node_processes_state Number of processes in each state.
# TYPE node_processes_state gauge
node_processes_state{state="S"} 1
# HELP node_processes_threads Allocated threads in the system
# TYPE node_processes_threads gauge
node_processes_threads 1
2015-09-26 20:54:49 +02:00
# HELP node_procs_blocked Number of processes blocked waiting for I/O to complete.
# TYPE node_procs_blocked gauge
node_procs_blocked 0
# HELP node_procs_running Number of processes in runnable state.
# TYPE node_procs_running gauge
node_procs_running 2
2020-06-04 19:01:34 +02:00
# HELP node_qdisc_backlog Number of bytes currently in queue to be sent.
# TYPE node_qdisc_backlog gauge
node_qdisc_backlog{device="eth0",kind="pfifo_fast"} 0
node_qdisc_backlog{device="wlan0",kind="fq"} 0
2017-05-23 11:55:50 +02:00
# HELP node_qdisc_bytes_total Number of bytes sent.
# TYPE node_qdisc_bytes_total counter
node_qdisc_bytes_total{device="eth0",kind="pfifo_fast"} 83
node_qdisc_bytes_total{device="wlan0",kind="fq"} 42
2020-06-04 19:01:34 +02:00
# HELP node_qdisc_current_queue_length Number of packets currently in queue to be sent.
# TYPE node_qdisc_current_queue_length gauge
node_qdisc_current_queue_length{device="eth0",kind="pfifo_fast"} 0
node_qdisc_current_queue_length{device="wlan0",kind="fq"} 0
2017-05-23 11:55:50 +02:00
# HELP node_qdisc_drops_total Number of packets dropped.
# TYPE node_qdisc_drops_total counter
node_qdisc_drops_total{device="eth0",kind="pfifo_fast"} 0
node_qdisc_drops_total{device="wlan0",kind="fq"} 1
# HELP node_qdisc_overlimits_total Number of overlimit packets.
# TYPE node_qdisc_overlimits_total counter
node_qdisc_overlimits_total{device="eth0",kind="pfifo_fast"} 0
node_qdisc_overlimits_total{device="wlan0",kind="fq"} 0
# HELP node_qdisc_packets_total Number of packets sent.
# TYPE node_qdisc_packets_total counter
node_qdisc_packets_total{device="eth0",kind="pfifo_fast"} 83
node_qdisc_packets_total{device="wlan0",kind="fq"} 42
# HELP node_qdisc_requeues_total Number of packets dequeued, not transmitted, and requeued.
# TYPE node_qdisc_requeues_total counter
node_qdisc_requeues_total{device="eth0",kind="pfifo_fast"} 2
node_qdisc_requeues_total{device="wlan0",kind="fq"} 1
2020-01-17 13:32:16 +01:00
# HELP node_rapl_core_joules_total Current RAPL core value in joules
# TYPE node_rapl_core_joules_total counter
node_rapl_core_joules_total{index="0"} 118821.284256
# HELP node_rapl_package_joules_total Current RAPL package value in joules
# TYPE node_rapl_package_joules_total counter
node_rapl_package_joules_total{index="0"} 240422.366267
2019-07-10 09:16:24 +02:00
# HELP node_schedstat_running_seconds_total Number of seconds CPU spent running a process.
# TYPE node_schedstat_running_seconds_total counter
2019-08-06 19:08:06 +02:00
node_schedstat_running_seconds_total{cpu="0"} 2.045936778163039e+06
node_schedstat_running_seconds_total{cpu="1"} 1.904686152592476e+06
2019-07-10 09:16:24 +02:00
# HELP node_schedstat_timeslices_total Number of timeslices executed by CPU.
# TYPE node_schedstat_timeslices_total counter
node_schedstat_timeslices_total{cpu="0"} 4.767485306e+09
node_schedstat_timeslices_total{cpu="1"} 5.145567945e+09
# HELP node_schedstat_waiting_seconds_total Number of seconds spent by processes waiting for this CPU.
# TYPE node_schedstat_waiting_seconds_total counter
2019-08-06 19:08:06 +02:00
node_schedstat_waiting_seconds_total{cpu="0"} 343796.328169361
node_schedstat_waiting_seconds_total{cpu="1"} 364107.263788241
2017-03-16 18:21:00 +01:00
# HELP node_scrape_collector_duration_seconds node_exporter: Duration of a collector scrape.
# TYPE node_scrape_collector_duration_seconds gauge
# HELP node_scrape_collector_success node_exporter: Whether a collector succeeded.
# TYPE node_scrape_collector_success gauge
2017-04-11 17:45:19 +02:00
node_scrape_collector_success{collector="arp"} 1
2017-07-07 07:20:18 +02:00
node_scrape_collector_success{collector="bcache"} 1
2017-03-16 18:21:00 +01:00
node_scrape_collector_success{collector="bonding"} 1
2020-02-19 15:48:51 +01:00
node_scrape_collector_success{collector="btrfs"} 1
2017-03-16 18:21:00 +01:00
node_scrape_collector_success{collector="buddyinfo"} 1
node_scrape_collector_success{collector="conntrack"} 1
2017-06-13 11:21:53 +02:00
node_scrape_collector_success{collector="cpu"} 1
2019-02-19 17:22:54 +01:00
node_scrape_collector_success{collector="cpufreq"} 1
2017-03-16 18:21:00 +01:00
node_scrape_collector_success{collector="diskstats"} 1
node_scrape_collector_success{collector="drbd"} 1
node_scrape_collector_success{collector="edac"} 1
node_scrape_collector_success{collector="entropy"} 1
node_scrape_collector_success{collector="filefd"} 1
node_scrape_collector_success{collector="hwmon"} 1
node_scrape_collector_success{collector="infiniband"} 1
2017-11-02 09:59:46 +01:00
node_scrape_collector_success{collector="interrupts"} 1
2017-07-26 15:20:28 +02:00
node_scrape_collector_success{collector="ipvs"} 1
2017-03-16 18:21:00 +01:00
node_scrape_collector_success{collector="ksmd"} 1
node_scrape_collector_success{collector="loadavg"} 1
node_scrape_collector_success{collector="mdadm"} 1
node_scrape_collector_success{collector="meminfo"} 1
node_scrape_collector_success{collector="meminfo_numa"} 1
node_scrape_collector_success{collector="mountstats"} 1
2018-07-16 15:08:18 +02:00
node_scrape_collector_success{collector="netclass"} 1
2017-03-16 18:21:00 +01:00
node_scrape_collector_success{collector="netdev"} 1
node_scrape_collector_success{collector="netstat"} 1
node_scrape_collector_success{collector="nfs"} 1
2018-02-12 17:56:05 +01:00
node_scrape_collector_success{collector="nfsd"} 1
2019-09-18 21:31:15 +02:00
node_scrape_collector_success{collector="powersupplyclass"} 1
2019-04-18 12:19:20 +02:00
node_scrape_collector_success{collector="pressure"} 1
2018-06-05 19:38:32 +02:00
node_scrape_collector_success{collector="processes"} 1
2017-05-23 11:55:50 +02:00
node_scrape_collector_success{collector="qdisc"} 1
2020-01-17 13:32:16 +01:00
node_scrape_collector_success{collector="rapl"} 1
2019-07-10 09:16:24 +02:00
node_scrape_collector_success{collector="schedstat"} 1
2017-03-16 18:21:00 +01:00
node_scrape_collector_success{collector="sockstat"} 1
2019-12-30 01:36:10 +01:00
node_scrape_collector_success{collector="softnet"} 1
2017-03-16 18:21:00 +01:00
node_scrape_collector_success{collector="stat"} 1
node_scrape_collector_success{collector="textfile"} 1
2019-08-04 12:56:36 +02:00
node_scrape_collector_success{collector="thermal_zone"} 1
2020-03-31 10:46:32 +02:00
node_scrape_collector_success{collector="udp_queues"} 1
2018-03-29 17:34:52 +02:00
node_scrape_collector_success{collector="vmstat"} 1
2017-03-16 18:21:00 +01:00
node_scrape_collector_success{collector="wifi"} 1
2017-04-22 00:19:35 +02:00
node_scrape_collector_success{collector="xfs"} 1
2017-03-16 18:21:00 +01:00
node_scrape_collector_success{collector="zfs"} 1
2019-11-25 20:41:38 +01:00
# HELP node_sockstat_FRAG6_inuse Number of FRAG6 sockets in state inuse.
# TYPE node_sockstat_FRAG6_inuse gauge
node_sockstat_FRAG6_inuse 0
# HELP node_sockstat_FRAG6_memory Number of FRAG6 sockets in state memory.
# TYPE node_sockstat_FRAG6_memory gauge
node_sockstat_FRAG6_memory 0
2015-09-26 20:54:49 +02:00
# HELP node_sockstat_FRAG_inuse Number of FRAG sockets in state inuse.
# TYPE node_sockstat_FRAG_inuse gauge
node_sockstat_FRAG_inuse 0
# HELP node_sockstat_FRAG_memory Number of FRAG sockets in state memory.
# TYPE node_sockstat_FRAG_memory gauge
node_sockstat_FRAG_memory 0
2019-11-25 20:41:38 +01:00
# HELP node_sockstat_RAW6_inuse Number of RAW6 sockets in state inuse.
# TYPE node_sockstat_RAW6_inuse gauge
node_sockstat_RAW6_inuse 1
2015-09-26 20:54:49 +02:00
# HELP node_sockstat_RAW_inuse Number of RAW sockets in state inuse.
# TYPE node_sockstat_RAW_inuse gauge
node_sockstat_RAW_inuse 0
2019-11-25 20:41:38 +01:00
# HELP node_sockstat_TCP6_inuse Number of TCP6 sockets in state inuse.
# TYPE node_sockstat_TCP6_inuse gauge
node_sockstat_TCP6_inuse 17
2015-09-26 20:54:49 +02:00
# HELP node_sockstat_TCP_alloc Number of TCP sockets in state alloc.
# TYPE node_sockstat_TCP_alloc gauge
node_sockstat_TCP_alloc 17
# HELP node_sockstat_TCP_inuse Number of TCP sockets in state inuse.
# TYPE node_sockstat_TCP_inuse gauge
node_sockstat_TCP_inuse 4
# HELP node_sockstat_TCP_mem Number of TCP sockets in state mem.
# TYPE node_sockstat_TCP_mem gauge
node_sockstat_TCP_mem 1
# HELP node_sockstat_TCP_mem_bytes Number of TCP sockets in state mem_bytes.
# TYPE node_sockstat_TCP_mem_bytes gauge
node_sockstat_TCP_mem_bytes 4096
# HELP node_sockstat_TCP_orphan Number of TCP sockets in state orphan.
# TYPE node_sockstat_TCP_orphan gauge
node_sockstat_TCP_orphan 0
# HELP node_sockstat_TCP_tw Number of TCP sockets in state tw.
# TYPE node_sockstat_TCP_tw gauge
node_sockstat_TCP_tw 4
2019-11-25 20:41:38 +01:00
# HELP node_sockstat_UDP6_inuse Number of UDP6 sockets in state inuse.
# TYPE node_sockstat_UDP6_inuse gauge
node_sockstat_UDP6_inuse 9
# HELP node_sockstat_UDPLITE6_inuse Number of UDPLITE6 sockets in state inuse.
# TYPE node_sockstat_UDPLITE6_inuse gauge
node_sockstat_UDPLITE6_inuse 0
2015-09-26 20:54:49 +02:00
# HELP node_sockstat_UDPLITE_inuse Number of UDPLITE sockets in state inuse.
# TYPE node_sockstat_UDPLITE_inuse gauge
node_sockstat_UDPLITE_inuse 0
# HELP node_sockstat_UDP_inuse Number of UDP sockets in state inuse.
# TYPE node_sockstat_UDP_inuse gauge
node_sockstat_UDP_inuse 0
# HELP node_sockstat_UDP_mem Number of UDP sockets in state mem.
# TYPE node_sockstat_UDP_mem gauge
node_sockstat_UDP_mem 0
# HELP node_sockstat_UDP_mem_bytes Number of UDP sockets in state mem_bytes.
# TYPE node_sockstat_UDP_mem_bytes gauge
node_sockstat_UDP_mem_bytes 0
2019-11-25 20:41:38 +01:00
# HELP node_sockstat_sockets_used Number of IPv4 sockets in use.
2015-09-26 20:54:49 +02:00
# TYPE node_sockstat_sockets_used gauge
node_sockstat_sockets_used 229
2019-12-30 01:36:10 +01:00
# HELP node_softnet_dropped_total Number of dropped packets
# TYPE node_softnet_dropped_total counter
node_softnet_dropped_total{cpu="0"} 0
node_softnet_dropped_total{cpu="1"} 41
node_softnet_dropped_total{cpu="2"} 0
node_softnet_dropped_total{cpu="3"} 0
# HELP node_softnet_processed_total Number of processed packets
# TYPE node_softnet_processed_total counter
node_softnet_processed_total{cpu="0"} 299641
node_softnet_processed_total{cpu="1"} 916354
node_softnet_processed_total{cpu="2"} 5.577791e+06
node_softnet_processed_total{cpu="3"} 3.113785e+06
# HELP node_softnet_times_squeezed_total Number of times processing packets ran out of quota
# TYPE node_softnet_times_squeezed_total counter
node_softnet_times_squeezed_total{cpu="0"} 1
node_softnet_times_squeezed_total{cpu="1"} 10
node_softnet_times_squeezed_total{cpu="2"} 85
node_softnet_times_squeezed_total{cpu="3"} 50
2018-01-22 14:02:19 +01:00
# HELP node_textfile_mtime_seconds Unixtime mtime of textfiles successfully read.
# TYPE node_textfile_mtime_seconds gauge
2015-09-26 20:54:49 +02:00
# HELP node_textfile_scrape_error 1 if there was an error opening or reading a file, 0 otherwise
# TYPE node_textfile_scrape_error gauge
node_textfile_scrape_error 0
2019-08-04 12:56:36 +02:00
# HELP node_thermal_zone_temp Zone temperature in Celsius
# TYPE node_thermal_zone_temp gauge
node_thermal_zone_temp{type="cpu-thermal",zone="0"} 12.376
2020-03-31 10:46:32 +02:00
# HELP node_udp_queues Amount of memory allocated in the kernel for UDP datagrams, in bytes.
# TYPE node_udp_queues gauge
node_udp_queues{ip="v4",queue="rx"} 0
node_udp_queues{ip="v4",queue="tx"} 21
2018-03-29 20:20:21 +02:00
# HELP node_vmstat_oom_kill /proc/vmstat information field oom_kill.
# TYPE node_vmstat_oom_kill untyped
node_vmstat_oom_kill 0
2018-03-29 17:34:52 +02:00
# HELP node_vmstat_pgfault /proc/vmstat information field pgfault.
# TYPE node_vmstat_pgfault untyped
node_vmstat_pgfault 2.320168809e+09
# HELP node_vmstat_pgmajfault /proc/vmstat information field pgmajfault.
# TYPE node_vmstat_pgmajfault untyped
node_vmstat_pgmajfault 507162
# HELP node_vmstat_pgpgin /proc/vmstat information field pgpgin.
# TYPE node_vmstat_pgpgin untyped
node_vmstat_pgpgin 7.344136e+06
# HELP node_vmstat_pgpgout /proc/vmstat information field pgpgout.
# TYPE node_vmstat_pgpgout untyped
node_vmstat_pgpgout 1.541180581e+09
# HELP node_vmstat_pswpin /proc/vmstat information field pswpin.
# TYPE node_vmstat_pswpin untyped
node_vmstat_pswpin 1476
# HELP node_vmstat_pswpout /proc/vmstat information field pswpout.
# TYPE node_vmstat_pswpout untyped
node_vmstat_pswpout 35045
2017-01-09 20:37:59 +01:00
# HELP node_wifi_interface_frequency_hertz The current frequency a WiFi interface is operating at, in hertz.
# TYPE node_wifi_interface_frequency_hertz gauge
node_wifi_interface_frequency_hertz{device="wlan0"} 2.412e+09
2017-03-20 17:25:01 +01:00
node_wifi_interface_frequency_hertz{device="wlan1"} 2.412e+09
2017-01-09 20:37:59 +01:00
# HELP node_wifi_station_beacon_loss_total The total number of times a station has detected a beacon loss.
# TYPE node_wifi_station_beacon_loss_total counter
2018-07-16 16:02:25 +02:00
node_wifi_station_beacon_loss_total{device="wlan0",mac_address="01:02:03:04:05:06"} 2
node_wifi_station_beacon_loss_total{device="wlan0",mac_address="aa:bb:cc:dd:ee:ff"} 1
2017-01-09 20:37:59 +01:00
# HELP node_wifi_station_connected_seconds_total The total number of seconds a station has been connected to an access point.
# TYPE node_wifi_station_connected_seconds_total counter
2018-07-16 16:02:25 +02:00
node_wifi_station_connected_seconds_total{device="wlan0",mac_address="01:02:03:04:05:06"} 60
node_wifi_station_connected_seconds_total{device="wlan0",mac_address="aa:bb:cc:dd:ee:ff"} 30
2017-01-09 20:37:59 +01:00
# HELP node_wifi_station_inactive_seconds The number of seconds since any wireless activity has occurred on a station.
# TYPE node_wifi_station_inactive_seconds gauge
2018-07-16 16:02:25 +02:00
node_wifi_station_inactive_seconds{device="wlan0",mac_address="01:02:03:04:05:06"} 0.8
node_wifi_station_inactive_seconds{device="wlan0",mac_address="aa:bb:cc:dd:ee:ff"} 0.4
2017-03-13 21:20:42 +01:00
# HELP node_wifi_station_info Labeled WiFi interface station information as provided by the operating system.
# TYPE node_wifi_station_info gauge
node_wifi_station_info{bssid="00:11:22:33:44:55",device="wlan0",mode="client",ssid="Example"} 1
2017-01-09 20:37:59 +01:00
# HELP node_wifi_station_receive_bits_per_second The current WiFi receive bitrate of a station, in bits per second.
# TYPE node_wifi_station_receive_bits_per_second gauge
2018-07-16 16:02:25 +02:00
node_wifi_station_receive_bits_per_second{device="wlan0",mac_address="01:02:03:04:05:06"} 2.56e+08
node_wifi_station_receive_bits_per_second{device="wlan0",mac_address="aa:bb:cc:dd:ee:ff"} 1.28e+08
2018-11-19 19:15:54 +01:00
# HELP node_wifi_station_receive_bytes_total The total number of bytes received by a WiFi station.
# TYPE node_wifi_station_receive_bytes_total counter
node_wifi_station_receive_bytes_total{device="wlan0",mac_address="01:02:03:04:05:06"} 0
node_wifi_station_receive_bytes_total{device="wlan0",mac_address="aa:bb:cc:dd:ee:ff"} 0
2017-01-09 20:37:59 +01:00
# HELP node_wifi_station_signal_dbm The current WiFi signal strength, in decibel-milliwatts (dBm).
# TYPE node_wifi_station_signal_dbm gauge
2018-07-16 16:02:25 +02:00
node_wifi_station_signal_dbm{device="wlan0",mac_address="01:02:03:04:05:06"} -26
node_wifi_station_signal_dbm{device="wlan0",mac_address="aa:bb:cc:dd:ee:ff"} -52
2017-01-09 20:37:59 +01:00
# HELP node_wifi_station_transmit_bits_per_second The current WiFi transmit bitrate of a station, in bits per second.
# TYPE node_wifi_station_transmit_bits_per_second gauge
2018-07-16 16:02:25 +02:00
node_wifi_station_transmit_bits_per_second{device="wlan0",mac_address="01:02:03:04:05:06"} 3.28e+08
node_wifi_station_transmit_bits_per_second{device="wlan0",mac_address="aa:bb:cc:dd:ee:ff"} 1.64e+08
2018-11-19 19:15:54 +01:00
# HELP node_wifi_station_transmit_bytes_total The total number of bytes transmitted by a WiFi station.
# TYPE node_wifi_station_transmit_bytes_total counter
node_wifi_station_transmit_bytes_total{device="wlan0",mac_address="01:02:03:04:05:06"} 0
node_wifi_station_transmit_bytes_total{device="wlan0",mac_address="aa:bb:cc:dd:ee:ff"} 0
2017-01-09 20:37:59 +01:00
# HELP node_wifi_station_transmit_failed_total The total number of times a station has failed to send a packet.
# TYPE node_wifi_station_transmit_failed_total counter
2018-07-16 16:02:25 +02:00
node_wifi_station_transmit_failed_total{device="wlan0",mac_address="01:02:03:04:05:06"} 4
node_wifi_station_transmit_failed_total{device="wlan0",mac_address="aa:bb:cc:dd:ee:ff"} 2
2017-01-09 20:37:59 +01:00
# HELP node_wifi_station_transmit_retries_total The total number of times a station has had to retry while sending a packet.
# TYPE node_wifi_station_transmit_retries_total counter
2018-07-16 16:02:25 +02:00
node_wifi_station_transmit_retries_total{device="wlan0",mac_address="01:02:03:04:05:06"} 20
node_wifi_station_transmit_retries_total{device="wlan0",mac_address="aa:bb:cc:dd:ee:ff"} 10
2017-04-22 00:19:35 +02:00
# HELP node_xfs_allocation_btree_compares_total Number of allocation B-tree compares for a filesystem.
# TYPE node_xfs_allocation_btree_compares_total counter
node_xfs_allocation_btree_compares_total{device="sda1"} 0
# HELP node_xfs_allocation_btree_lookups_total Number of allocation B-tree lookups for a filesystem.
# TYPE node_xfs_allocation_btree_lookups_total counter
node_xfs_allocation_btree_lookups_total{device="sda1"} 0
# HELP node_xfs_allocation_btree_records_deleted_total Number of allocation B-tree records deleted for a filesystem.
# TYPE node_xfs_allocation_btree_records_deleted_total counter
node_xfs_allocation_btree_records_deleted_total{device="sda1"} 0
# HELP node_xfs_allocation_btree_records_inserted_total Number of allocation B-tree records inserted for a filesystem.
# TYPE node_xfs_allocation_btree_records_inserted_total counter
node_xfs_allocation_btree_records_inserted_total{device="sda1"} 0
2017-10-21 00:41:51 +02:00
# HELP node_xfs_block_map_btree_compares_total Number of block map B-tree compares for a filesystem.
# TYPE node_xfs_block_map_btree_compares_total counter
node_xfs_block_map_btree_compares_total{device="sda1"} 0
# HELP node_xfs_block_map_btree_lookups_total Number of block map B-tree lookups for a filesystem.
# TYPE node_xfs_block_map_btree_lookups_total counter
node_xfs_block_map_btree_lookups_total{device="sda1"} 0
# HELP node_xfs_block_map_btree_records_deleted_total Number of block map B-tree records deleted for a filesystem.
# TYPE node_xfs_block_map_btree_records_deleted_total counter
node_xfs_block_map_btree_records_deleted_total{device="sda1"} 0
# HELP node_xfs_block_map_btree_records_inserted_total Number of block map B-tree records inserted for a filesystem.
# TYPE node_xfs_block_map_btree_records_inserted_total counter
node_xfs_block_map_btree_records_inserted_total{device="sda1"} 0
2017-07-07 07:27:52 +02:00
# HELP node_xfs_block_mapping_extent_list_compares_total Number of extent list compares for a filesystem.
# TYPE node_xfs_block_mapping_extent_list_compares_total counter
node_xfs_block_mapping_extent_list_compares_total{device="sda1"} 0
# HELP node_xfs_block_mapping_extent_list_deletions_total Number of extent list deletions for a filesystem.
# TYPE node_xfs_block_mapping_extent_list_deletions_total counter
node_xfs_block_mapping_extent_list_deletions_total{device="sda1"} 1
# HELP node_xfs_block_mapping_extent_list_insertions_total Number of extent list insertions for a filesystem.
# TYPE node_xfs_block_mapping_extent_list_insertions_total counter
node_xfs_block_mapping_extent_list_insertions_total{device="sda1"} 1
# HELP node_xfs_block_mapping_extent_list_lookups_total Number of extent list lookups for a filesystem.
# TYPE node_xfs_block_mapping_extent_list_lookups_total counter
node_xfs_block_mapping_extent_list_lookups_total{device="sda1"} 91
# HELP node_xfs_block_mapping_reads_total Number of block map for read operations for a filesystem.
# TYPE node_xfs_block_mapping_reads_total counter
node_xfs_block_mapping_reads_total{device="sda1"} 61
# HELP node_xfs_block_mapping_unmaps_total Number of block unmaps (deletes) for a filesystem.
# TYPE node_xfs_block_mapping_unmaps_total counter
node_xfs_block_mapping_unmaps_total{device="sda1"} 1
# HELP node_xfs_block_mapping_writes_total Number of block map for write operations for a filesystem.
# TYPE node_xfs_block_mapping_writes_total counter
node_xfs_block_mapping_writes_total{device="sda1"} 29
2019-07-15 16:28:09 +02:00
# HELP node_xfs_directory_operation_create_total Number of times a new directory entry was created for a filesystem.
# TYPE node_xfs_directory_operation_create_total counter
node_xfs_directory_operation_create_total{device="sda1"} 2
# HELP node_xfs_directory_operation_getdents_total Number of times the directory getdents operation was performed for a filesystem.
# TYPE node_xfs_directory_operation_getdents_total counter
node_xfs_directory_operation_getdents_total{device="sda1"} 52
# HELP node_xfs_directory_operation_lookup_total Number of file name directory lookups which miss the operating system's directory name lookup cache.
# TYPE node_xfs_directory_operation_lookup_total counter
node_xfs_directory_operation_lookup_total{device="sda1"} 3
# HELP node_xfs_directory_operation_remove_total Number of times an existing directory entry was removed for a filesystem.
# TYPE node_xfs_directory_operation_remove_total counter
node_xfs_directory_operation_remove_total{device="sda1"} 1
2017-04-22 00:19:35 +02:00
# HELP node_xfs_extent_allocation_blocks_allocated_total Number of blocks allocated for a filesystem.
# TYPE node_xfs_extent_allocation_blocks_allocated_total counter
node_xfs_extent_allocation_blocks_allocated_total{device="sda1"} 872
# HELP node_xfs_extent_allocation_blocks_freed_total Number of blocks freed for a filesystem.
# TYPE node_xfs_extent_allocation_blocks_freed_total counter
node_xfs_extent_allocation_blocks_freed_total{device="sda1"} 0
# HELP node_xfs_extent_allocation_extents_allocated_total Number of extents allocated for a filesystem.
# TYPE node_xfs_extent_allocation_extents_allocated_total counter
node_xfs_extent_allocation_extents_allocated_total{device="sda1"} 1
# HELP node_xfs_extent_allocation_extents_freed_total Number of extents freed for a filesystem.
# TYPE node_xfs_extent_allocation_extents_freed_total counter
node_xfs_extent_allocation_extents_freed_total{device="sda1"} 0
2019-07-15 16:28:09 +02:00
# HELP node_xfs_read_calls_total Number of read(2) system calls made to files in a filesystem.
# TYPE node_xfs_read_calls_total counter
node_xfs_read_calls_total{device="sda1"} 28
# HELP node_xfs_vnode_active_total Number of vnodes not on free lists for a filesystem.
# TYPE node_xfs_vnode_active_total counter
node_xfs_vnode_active_total{device="sda1"} 4
# HELP node_xfs_vnode_allocate_total Number of times vn_alloc called for a filesystem.
# TYPE node_xfs_vnode_allocate_total counter
node_xfs_vnode_allocate_total{device="sda1"} 0
# HELP node_xfs_vnode_get_total Number of times vn_get called for a filesystem.
# TYPE node_xfs_vnode_get_total counter
node_xfs_vnode_get_total{device="sda1"} 0
# HELP node_xfs_vnode_hold_total Number of times vn_hold called for a filesystem.
# TYPE node_xfs_vnode_hold_total counter
node_xfs_vnode_hold_total{device="sda1"} 0
# HELP node_xfs_vnode_reclaim_total Number of times vn_reclaim called for a filesystem.
# TYPE node_xfs_vnode_reclaim_total counter
node_xfs_vnode_reclaim_total{device="sda1"} 1
# HELP node_xfs_vnode_release_total Number of times vn_rele called for a filesystem.
# TYPE node_xfs_vnode_release_total counter
node_xfs_vnode_release_total{device="sda1"} 1
# HELP node_xfs_vnode_remove_total Number of times vn_remove called for a filesystem.
# TYPE node_xfs_vnode_remove_total counter
node_xfs_vnode_remove_total{device="sda1"} 1
# HELP node_xfs_write_calls_total Number of write(2) system calls made to files in a filesystem.
# TYPE node_xfs_write_calls_total counter
node_xfs_write_calls_total{device="sda1"} 0
2018-02-16 15:46:31 +01:00
# HELP node_zfs_abd_linear_cnt kstat.zfs.misc.abdstats.linear_cnt
# TYPE node_zfs_abd_linear_cnt untyped
node_zfs_abd_linear_cnt 62
# HELP node_zfs_abd_linear_data_size kstat.zfs.misc.abdstats.linear_data_size
# TYPE node_zfs_abd_linear_data_size untyped
node_zfs_abd_linear_data_size 223232
# HELP node_zfs_abd_scatter_chunk_waste kstat.zfs.misc.abdstats.scatter_chunk_waste
# TYPE node_zfs_abd_scatter_chunk_waste untyped
node_zfs_abd_scatter_chunk_waste 0
# HELP node_zfs_abd_scatter_cnt kstat.zfs.misc.abdstats.scatter_cnt
# TYPE node_zfs_abd_scatter_cnt untyped
node_zfs_abd_scatter_cnt 1
# HELP node_zfs_abd_scatter_data_size kstat.zfs.misc.abdstats.scatter_data_size
# TYPE node_zfs_abd_scatter_data_size untyped
node_zfs_abd_scatter_data_size 16384
# HELP node_zfs_abd_scatter_order_0 kstat.zfs.misc.abdstats.scatter_order_0
# TYPE node_zfs_abd_scatter_order_0 untyped
node_zfs_abd_scatter_order_0 0
# HELP node_zfs_abd_scatter_order_1 kstat.zfs.misc.abdstats.scatter_order_1
# TYPE node_zfs_abd_scatter_order_1 untyped
node_zfs_abd_scatter_order_1 0
# HELP node_zfs_abd_scatter_order_10 kstat.zfs.misc.abdstats.scatter_order_10
# TYPE node_zfs_abd_scatter_order_10 untyped
node_zfs_abd_scatter_order_10 0
# HELP node_zfs_abd_scatter_order_2 kstat.zfs.misc.abdstats.scatter_order_2
# TYPE node_zfs_abd_scatter_order_2 untyped
node_zfs_abd_scatter_order_2 1
# HELP node_zfs_abd_scatter_order_3 kstat.zfs.misc.abdstats.scatter_order_3
# TYPE node_zfs_abd_scatter_order_3 untyped
node_zfs_abd_scatter_order_3 0
# HELP node_zfs_abd_scatter_order_4 kstat.zfs.misc.abdstats.scatter_order_4
# TYPE node_zfs_abd_scatter_order_4 untyped
node_zfs_abd_scatter_order_4 0
# HELP node_zfs_abd_scatter_order_5 kstat.zfs.misc.abdstats.scatter_order_5
# TYPE node_zfs_abd_scatter_order_5 untyped
node_zfs_abd_scatter_order_5 0
# HELP node_zfs_abd_scatter_order_6 kstat.zfs.misc.abdstats.scatter_order_6
# TYPE node_zfs_abd_scatter_order_6 untyped
node_zfs_abd_scatter_order_6 0
# HELP node_zfs_abd_scatter_order_7 kstat.zfs.misc.abdstats.scatter_order_7
# TYPE node_zfs_abd_scatter_order_7 untyped
node_zfs_abd_scatter_order_7 0
# HELP node_zfs_abd_scatter_order_8 kstat.zfs.misc.abdstats.scatter_order_8
# TYPE node_zfs_abd_scatter_order_8 untyped
node_zfs_abd_scatter_order_8 0
# HELP node_zfs_abd_scatter_order_9 kstat.zfs.misc.abdstats.scatter_order_9
# TYPE node_zfs_abd_scatter_order_9 untyped
node_zfs_abd_scatter_order_9 0
# HELP node_zfs_abd_scatter_page_alloc_retry kstat.zfs.misc.abdstats.scatter_page_alloc_retry
# TYPE node_zfs_abd_scatter_page_alloc_retry untyped
node_zfs_abd_scatter_page_alloc_retry 0
# HELP node_zfs_abd_scatter_page_multi_chunk kstat.zfs.misc.abdstats.scatter_page_multi_chunk
# TYPE node_zfs_abd_scatter_page_multi_chunk untyped
node_zfs_abd_scatter_page_multi_chunk 0
# HELP node_zfs_abd_scatter_page_multi_zone kstat.zfs.misc.abdstats.scatter_page_multi_zone
# TYPE node_zfs_abd_scatter_page_multi_zone untyped
node_zfs_abd_scatter_page_multi_zone 0
# HELP node_zfs_abd_scatter_sg_table_retry kstat.zfs.misc.abdstats.scatter_sg_table_retry
# TYPE node_zfs_abd_scatter_sg_table_retry untyped
node_zfs_abd_scatter_sg_table_retry 0
# HELP node_zfs_abd_struct_size kstat.zfs.misc.abdstats.struct_size
# TYPE node_zfs_abd_struct_size untyped
node_zfs_abd_struct_size 2520
2017-01-29 22:59:01 +01:00
# HELP node_zfs_arc_anon_evictable_data kstat.zfs.misc.arcstats.anon_evictable_data
# TYPE node_zfs_arc_anon_evictable_data untyped
node_zfs_arc_anon_evictable_data 0
# HELP node_zfs_arc_anon_evictable_metadata kstat.zfs.misc.arcstats.anon_evictable_metadata
# TYPE node_zfs_arc_anon_evictable_metadata untyped
node_zfs_arc_anon_evictable_metadata 0
# HELP node_zfs_arc_anon_size kstat.zfs.misc.arcstats.anon_size
# TYPE node_zfs_arc_anon_size untyped
node_zfs_arc_anon_size 1.91744e+06
# HELP node_zfs_arc_arc_loaned_bytes kstat.zfs.misc.arcstats.arc_loaned_bytes
# TYPE node_zfs_arc_arc_loaned_bytes untyped
node_zfs_arc_arc_loaned_bytes 0
# HELP node_zfs_arc_arc_meta_limit kstat.zfs.misc.arcstats.arc_meta_limit
# TYPE node_zfs_arc_arc_meta_limit untyped
node_zfs_arc_arc_meta_limit 6.275982336e+09
# HELP node_zfs_arc_arc_meta_max kstat.zfs.misc.arcstats.arc_meta_max
# TYPE node_zfs_arc_arc_meta_max untyped
node_zfs_arc_arc_meta_max 4.49286096e+08
# HELP node_zfs_arc_arc_meta_min kstat.zfs.misc.arcstats.arc_meta_min
# TYPE node_zfs_arc_arc_meta_min untyped
node_zfs_arc_arc_meta_min 1.6777216e+07
# HELP node_zfs_arc_arc_meta_used kstat.zfs.misc.arcstats.arc_meta_used
# TYPE node_zfs_arc_arc_meta_used untyped
node_zfs_arc_arc_meta_used 3.08103632e+08
# HELP node_zfs_arc_arc_need_free kstat.zfs.misc.arcstats.arc_need_free
# TYPE node_zfs_arc_arc_need_free untyped
node_zfs_arc_arc_need_free 0
# HELP node_zfs_arc_arc_no_grow kstat.zfs.misc.arcstats.arc_no_grow
# TYPE node_zfs_arc_arc_no_grow untyped
node_zfs_arc_arc_no_grow 0
# HELP node_zfs_arc_arc_prune kstat.zfs.misc.arcstats.arc_prune
# TYPE node_zfs_arc_arc_prune untyped
node_zfs_arc_arc_prune 0
# HELP node_zfs_arc_arc_sys_free kstat.zfs.misc.arcstats.arc_sys_free
# TYPE node_zfs_arc_arc_sys_free untyped
node_zfs_arc_arc_sys_free 2.61496832e+08
# HELP node_zfs_arc_arc_tempreserve kstat.zfs.misc.arcstats.arc_tempreserve
# TYPE node_zfs_arc_arc_tempreserve untyped
node_zfs_arc_arc_tempreserve 0
# HELP node_zfs_arc_c kstat.zfs.misc.arcstats.c
# TYPE node_zfs_arc_c untyped
node_zfs_arc_c 1.643208777e+09
# HELP node_zfs_arc_c_max kstat.zfs.misc.arcstats.c_max
# TYPE node_zfs_arc_c_max untyped
node_zfs_arc_c_max 8.367976448e+09
# HELP node_zfs_arc_c_min kstat.zfs.misc.arcstats.c_min
# TYPE node_zfs_arc_c_min untyped
node_zfs_arc_c_min 3.3554432e+07
# HELP node_zfs_arc_data_size kstat.zfs.misc.arcstats.data_size
# TYPE node_zfs_arc_data_size untyped
node_zfs_arc_data_size 1.29583616e+09
# HELP node_zfs_arc_deleted kstat.zfs.misc.arcstats.deleted
# TYPE node_zfs_arc_deleted untyped
node_zfs_arc_deleted 60403
# HELP node_zfs_arc_demand_data_hits kstat.zfs.misc.arcstats.demand_data_hits
# TYPE node_zfs_arc_demand_data_hits untyped
node_zfs_arc_demand_data_hits 7.221032e+06
# HELP node_zfs_arc_demand_data_misses kstat.zfs.misc.arcstats.demand_data_misses
# TYPE node_zfs_arc_demand_data_misses untyped
node_zfs_arc_demand_data_misses 73300
# HELP node_zfs_arc_demand_metadata_hits kstat.zfs.misc.arcstats.demand_metadata_hits
# TYPE node_zfs_arc_demand_metadata_hits untyped
node_zfs_arc_demand_metadata_hits 1.464353e+06
# HELP node_zfs_arc_demand_metadata_misses kstat.zfs.misc.arcstats.demand_metadata_misses
# TYPE node_zfs_arc_demand_metadata_misses untyped
node_zfs_arc_demand_metadata_misses 498170
# HELP node_zfs_arc_duplicate_buffers kstat.zfs.misc.arcstats.duplicate_buffers
# TYPE node_zfs_arc_duplicate_buffers untyped
node_zfs_arc_duplicate_buffers 0
# HELP node_zfs_arc_duplicate_buffers_size kstat.zfs.misc.arcstats.duplicate_buffers_size
# TYPE node_zfs_arc_duplicate_buffers_size untyped
node_zfs_arc_duplicate_buffers_size 0
# HELP node_zfs_arc_duplicate_reads kstat.zfs.misc.arcstats.duplicate_reads
# TYPE node_zfs_arc_duplicate_reads untyped
node_zfs_arc_duplicate_reads 0
# HELP node_zfs_arc_evict_l2_cached kstat.zfs.misc.arcstats.evict_l2_cached
# TYPE node_zfs_arc_evict_l2_cached untyped
node_zfs_arc_evict_l2_cached 0
# HELP node_zfs_arc_evict_l2_eligible kstat.zfs.misc.arcstats.evict_l2_eligible
# TYPE node_zfs_arc_evict_l2_eligible untyped
node_zfs_arc_evict_l2_eligible 8.99251456e+09
# HELP node_zfs_arc_evict_l2_ineligible kstat.zfs.misc.arcstats.evict_l2_ineligible
# TYPE node_zfs_arc_evict_l2_ineligible untyped
node_zfs_arc_evict_l2_ineligible 9.92552448e+08
# HELP node_zfs_arc_evict_l2_skip kstat.zfs.misc.arcstats.evict_l2_skip
# TYPE node_zfs_arc_evict_l2_skip untyped
node_zfs_arc_evict_l2_skip 0
# HELP node_zfs_arc_evict_not_enough kstat.zfs.misc.arcstats.evict_not_enough
# TYPE node_zfs_arc_evict_not_enough untyped
node_zfs_arc_evict_not_enough 680
# HELP node_zfs_arc_evict_skip kstat.zfs.misc.arcstats.evict_skip
# TYPE node_zfs_arc_evict_skip untyped
node_zfs_arc_evict_skip 2.265729e+06
# HELP node_zfs_arc_hash_chain_max kstat.zfs.misc.arcstats.hash_chain_max
# TYPE node_zfs_arc_hash_chain_max untyped
node_zfs_arc_hash_chain_max 3
# HELP node_zfs_arc_hash_chains kstat.zfs.misc.arcstats.hash_chains
# TYPE node_zfs_arc_hash_chains untyped
node_zfs_arc_hash_chains 412
# HELP node_zfs_arc_hash_collisions kstat.zfs.misc.arcstats.hash_collisions
# TYPE node_zfs_arc_hash_collisions untyped
node_zfs_arc_hash_collisions 50564
# HELP node_zfs_arc_hash_elements kstat.zfs.misc.arcstats.hash_elements
# TYPE node_zfs_arc_hash_elements untyped
node_zfs_arc_hash_elements 42359
# HELP node_zfs_arc_hash_elements_max kstat.zfs.misc.arcstats.hash_elements_max
# TYPE node_zfs_arc_hash_elements_max untyped
node_zfs_arc_hash_elements_max 88245
# HELP node_zfs_arc_hdr_size kstat.zfs.misc.arcstats.hdr_size
# TYPE node_zfs_arc_hdr_size untyped
node_zfs_arc_hdr_size 1.636108e+07
# HELP node_zfs_arc_hits kstat.zfs.misc.arcstats.hits
# TYPE node_zfs_arc_hits untyped
node_zfs_arc_hits 8.772612e+06
# HELP node_zfs_arc_l2_abort_lowmem kstat.zfs.misc.arcstats.l2_abort_lowmem
# TYPE node_zfs_arc_l2_abort_lowmem untyped
node_zfs_arc_l2_abort_lowmem 0
# HELP node_zfs_arc_l2_asize kstat.zfs.misc.arcstats.l2_asize
# TYPE node_zfs_arc_l2_asize untyped
node_zfs_arc_l2_asize 0
# HELP node_zfs_arc_l2_cdata_free_on_write kstat.zfs.misc.arcstats.l2_cdata_free_on_write
# TYPE node_zfs_arc_l2_cdata_free_on_write untyped
node_zfs_arc_l2_cdata_free_on_write 0
# HELP node_zfs_arc_l2_cksum_bad kstat.zfs.misc.arcstats.l2_cksum_bad
# TYPE node_zfs_arc_l2_cksum_bad untyped
node_zfs_arc_l2_cksum_bad 0
# HELP node_zfs_arc_l2_compress_failures kstat.zfs.misc.arcstats.l2_compress_failures
# TYPE node_zfs_arc_l2_compress_failures untyped
node_zfs_arc_l2_compress_failures 0
# HELP node_zfs_arc_l2_compress_successes kstat.zfs.misc.arcstats.l2_compress_successes
# TYPE node_zfs_arc_l2_compress_successes untyped
node_zfs_arc_l2_compress_successes 0
# HELP node_zfs_arc_l2_compress_zeros kstat.zfs.misc.arcstats.l2_compress_zeros
# TYPE node_zfs_arc_l2_compress_zeros untyped
node_zfs_arc_l2_compress_zeros 0
# HELP node_zfs_arc_l2_evict_l1cached kstat.zfs.misc.arcstats.l2_evict_l1cached
# TYPE node_zfs_arc_l2_evict_l1cached untyped
node_zfs_arc_l2_evict_l1cached 0
# HELP node_zfs_arc_l2_evict_lock_retry kstat.zfs.misc.arcstats.l2_evict_lock_retry
# TYPE node_zfs_arc_l2_evict_lock_retry untyped
node_zfs_arc_l2_evict_lock_retry 0
# HELP node_zfs_arc_l2_evict_reading kstat.zfs.misc.arcstats.l2_evict_reading
# TYPE node_zfs_arc_l2_evict_reading untyped
node_zfs_arc_l2_evict_reading 0
# HELP node_zfs_arc_l2_feeds kstat.zfs.misc.arcstats.l2_feeds
# TYPE node_zfs_arc_l2_feeds untyped
node_zfs_arc_l2_feeds 0
# HELP node_zfs_arc_l2_free_on_write kstat.zfs.misc.arcstats.l2_free_on_write
# TYPE node_zfs_arc_l2_free_on_write untyped
node_zfs_arc_l2_free_on_write 0
# HELP node_zfs_arc_l2_hdr_size kstat.zfs.misc.arcstats.l2_hdr_size
# TYPE node_zfs_arc_l2_hdr_size untyped
node_zfs_arc_l2_hdr_size 0
# HELP node_zfs_arc_l2_hits kstat.zfs.misc.arcstats.l2_hits
# TYPE node_zfs_arc_l2_hits untyped
node_zfs_arc_l2_hits 0
# HELP node_zfs_arc_l2_io_error kstat.zfs.misc.arcstats.l2_io_error
# TYPE node_zfs_arc_l2_io_error untyped
node_zfs_arc_l2_io_error 0
# HELP node_zfs_arc_l2_misses kstat.zfs.misc.arcstats.l2_misses
# TYPE node_zfs_arc_l2_misses untyped
node_zfs_arc_l2_misses 0
# HELP node_zfs_arc_l2_read_bytes kstat.zfs.misc.arcstats.l2_read_bytes
# TYPE node_zfs_arc_l2_read_bytes untyped
node_zfs_arc_l2_read_bytes 0
# HELP node_zfs_arc_l2_rw_clash kstat.zfs.misc.arcstats.l2_rw_clash
# TYPE node_zfs_arc_l2_rw_clash untyped
node_zfs_arc_l2_rw_clash 0
# HELP node_zfs_arc_l2_size kstat.zfs.misc.arcstats.l2_size
# TYPE node_zfs_arc_l2_size untyped
node_zfs_arc_l2_size 0
# HELP node_zfs_arc_l2_write_bytes kstat.zfs.misc.arcstats.l2_write_bytes
# TYPE node_zfs_arc_l2_write_bytes untyped
node_zfs_arc_l2_write_bytes 0
# HELP node_zfs_arc_l2_writes_done kstat.zfs.misc.arcstats.l2_writes_done
# TYPE node_zfs_arc_l2_writes_done untyped
node_zfs_arc_l2_writes_done 0
# HELP node_zfs_arc_l2_writes_error kstat.zfs.misc.arcstats.l2_writes_error
# TYPE node_zfs_arc_l2_writes_error untyped
node_zfs_arc_l2_writes_error 0
# HELP node_zfs_arc_l2_writes_lock_retry kstat.zfs.misc.arcstats.l2_writes_lock_retry
# TYPE node_zfs_arc_l2_writes_lock_retry untyped
node_zfs_arc_l2_writes_lock_retry 0
# HELP node_zfs_arc_l2_writes_sent kstat.zfs.misc.arcstats.l2_writes_sent
# TYPE node_zfs_arc_l2_writes_sent untyped
node_zfs_arc_l2_writes_sent 0
# HELP node_zfs_arc_memory_direct_count kstat.zfs.misc.arcstats.memory_direct_count
# TYPE node_zfs_arc_memory_direct_count untyped
node_zfs_arc_memory_direct_count 542
# HELP node_zfs_arc_memory_indirect_count kstat.zfs.misc.arcstats.memory_indirect_count
# TYPE node_zfs_arc_memory_indirect_count untyped
node_zfs_arc_memory_indirect_count 3006
# HELP node_zfs_arc_memory_throttle_count kstat.zfs.misc.arcstats.memory_throttle_count
# TYPE node_zfs_arc_memory_throttle_count untyped
node_zfs_arc_memory_throttle_count 0
# HELP node_zfs_arc_metadata_size kstat.zfs.misc.arcstats.metadata_size
# TYPE node_zfs_arc_metadata_size untyped
node_zfs_arc_metadata_size 1.7529856e+08
# HELP node_zfs_arc_mfu_evictable_data kstat.zfs.misc.arcstats.mfu_evictable_data
# TYPE node_zfs_arc_mfu_evictable_data untyped
node_zfs_arc_mfu_evictable_data 1.017613824e+09
# HELP node_zfs_arc_mfu_evictable_metadata kstat.zfs.misc.arcstats.mfu_evictable_metadata
# TYPE node_zfs_arc_mfu_evictable_metadata untyped
node_zfs_arc_mfu_evictable_metadata 9.163776e+06
# HELP node_zfs_arc_mfu_ghost_evictable_data kstat.zfs.misc.arcstats.mfu_ghost_evictable_data
# TYPE node_zfs_arc_mfu_ghost_evictable_data untyped
node_zfs_arc_mfu_ghost_evictable_data 9.6731136e+07
# HELP node_zfs_arc_mfu_ghost_evictable_metadata kstat.zfs.misc.arcstats.mfu_ghost_evictable_metadata
# TYPE node_zfs_arc_mfu_ghost_evictable_metadata untyped
node_zfs_arc_mfu_ghost_evictable_metadata 8.205312e+06
# HELP node_zfs_arc_mfu_ghost_hits kstat.zfs.misc.arcstats.mfu_ghost_hits
# TYPE node_zfs_arc_mfu_ghost_hits untyped
node_zfs_arc_mfu_ghost_hits 821
# HELP node_zfs_arc_mfu_ghost_size kstat.zfs.misc.arcstats.mfu_ghost_size
# TYPE node_zfs_arc_mfu_ghost_size untyped
node_zfs_arc_mfu_ghost_size 1.04936448e+08
# HELP node_zfs_arc_mfu_hits kstat.zfs.misc.arcstats.mfu_hits
# TYPE node_zfs_arc_mfu_hits untyped
node_zfs_arc_mfu_hits 7.829854e+06
# HELP node_zfs_arc_mfu_size kstat.zfs.misc.arcstats.mfu_size
# TYPE node_zfs_arc_mfu_size untyped
node_zfs_arc_mfu_size 1.066623488e+09
# HELP node_zfs_arc_misses kstat.zfs.misc.arcstats.misses
# TYPE node_zfs_arc_misses untyped
node_zfs_arc_misses 604635
# HELP node_zfs_arc_mru_evictable_data kstat.zfs.misc.arcstats.mru_evictable_data
# TYPE node_zfs_arc_mru_evictable_data untyped
node_zfs_arc_mru_evictable_data 2.78091264e+08
# HELP node_zfs_arc_mru_evictable_metadata kstat.zfs.misc.arcstats.mru_evictable_metadata
# TYPE node_zfs_arc_mru_evictable_metadata untyped
node_zfs_arc_mru_evictable_metadata 1.8606592e+07
# HELP node_zfs_arc_mru_ghost_evictable_data kstat.zfs.misc.arcstats.mru_ghost_evictable_data
# TYPE node_zfs_arc_mru_ghost_evictable_data untyped
node_zfs_arc_mru_ghost_evictable_data 8.83765248e+08
# HELP node_zfs_arc_mru_ghost_evictable_metadata kstat.zfs.misc.arcstats.mru_ghost_evictable_metadata
# TYPE node_zfs_arc_mru_ghost_evictable_metadata untyped
node_zfs_arc_mru_ghost_evictable_metadata 1.1596288e+08
# HELP node_zfs_arc_mru_ghost_hits kstat.zfs.misc.arcstats.mru_ghost_hits
# TYPE node_zfs_arc_mru_ghost_hits untyped
node_zfs_arc_mru_ghost_hits 21100
# HELP node_zfs_arc_mru_ghost_size kstat.zfs.misc.arcstats.mru_ghost_size
# TYPE node_zfs_arc_mru_ghost_size untyped
node_zfs_arc_mru_ghost_size 9.99728128e+08
# HELP node_zfs_arc_mru_hits kstat.zfs.misc.arcstats.mru_hits
# TYPE node_zfs_arc_mru_hits untyped
node_zfs_arc_mru_hits 855535
# HELP node_zfs_arc_mru_size kstat.zfs.misc.arcstats.mru_size
# TYPE node_zfs_arc_mru_size untyped
node_zfs_arc_mru_size 4.02593792e+08
# HELP node_zfs_arc_mutex_miss kstat.zfs.misc.arcstats.mutex_miss
# TYPE node_zfs_arc_mutex_miss untyped
node_zfs_arc_mutex_miss 2
# HELP node_zfs_arc_other_size kstat.zfs.misc.arcstats.other_size
# TYPE node_zfs_arc_other_size untyped
node_zfs_arc_other_size 1.16443992e+08
# HELP node_zfs_arc_p kstat.zfs.misc.arcstats.p
# TYPE node_zfs_arc_p untyped
node_zfs_arc_p 5.16395305e+08
# HELP node_zfs_arc_prefetch_data_hits kstat.zfs.misc.arcstats.prefetch_data_hits
# TYPE node_zfs_arc_prefetch_data_hits untyped
node_zfs_arc_prefetch_data_hits 3615
# HELP node_zfs_arc_prefetch_data_misses kstat.zfs.misc.arcstats.prefetch_data_misses
# TYPE node_zfs_arc_prefetch_data_misses untyped
node_zfs_arc_prefetch_data_misses 17094
# HELP node_zfs_arc_prefetch_metadata_hits kstat.zfs.misc.arcstats.prefetch_metadata_hits
# TYPE node_zfs_arc_prefetch_metadata_hits untyped
node_zfs_arc_prefetch_metadata_hits 83612
# HELP node_zfs_arc_prefetch_metadata_misses kstat.zfs.misc.arcstats.prefetch_metadata_misses
# TYPE node_zfs_arc_prefetch_metadata_misses untyped
node_zfs_arc_prefetch_metadata_misses 16071
# HELP node_zfs_arc_size kstat.zfs.misc.arcstats.size
# TYPE node_zfs_arc_size untyped
node_zfs_arc_size 1.603939792e+09
# HELP node_zfs_dbuf_dbuf_cache_count kstat.zfs.misc.dbuf_stats.dbuf_cache_count
# TYPE node_zfs_dbuf_dbuf_cache_count untyped
node_zfs_dbuf_dbuf_cache_count 27
# HELP node_zfs_dbuf_dbuf_cache_hiwater_bytes kstat.zfs.misc.dbuf_stats.dbuf_cache_hiwater_bytes
# TYPE node_zfs_dbuf_dbuf_cache_hiwater_bytes untyped
node_zfs_dbuf_dbuf_cache_hiwater_bytes 6.9117804e+07
# HELP node_zfs_dbuf_dbuf_cache_level_0 kstat.zfs.misc.dbuf_stats.dbuf_cache_level_0
# TYPE node_zfs_dbuf_dbuf_cache_level_0 untyped
node_zfs_dbuf_dbuf_cache_level_0 27
# HELP node_zfs_dbuf_dbuf_cache_level_0_bytes kstat.zfs.misc.dbuf_stats.dbuf_cache_level_0_bytes
# TYPE node_zfs_dbuf_dbuf_cache_level_0_bytes untyped
node_zfs_dbuf_dbuf_cache_level_0_bytes 302080
# HELP node_zfs_dbuf_dbuf_cache_level_1 kstat.zfs.misc.dbuf_stats.dbuf_cache_level_1
# TYPE node_zfs_dbuf_dbuf_cache_level_1 untyped
node_zfs_dbuf_dbuf_cache_level_1 0
# HELP node_zfs_dbuf_dbuf_cache_level_10 kstat.zfs.misc.dbuf_stats.dbuf_cache_level_10
# TYPE node_zfs_dbuf_dbuf_cache_level_10 untyped
node_zfs_dbuf_dbuf_cache_level_10 0
# HELP node_zfs_dbuf_dbuf_cache_level_10_bytes kstat.zfs.misc.dbuf_stats.dbuf_cache_level_10_bytes
# TYPE node_zfs_dbuf_dbuf_cache_level_10_bytes untyped
node_zfs_dbuf_dbuf_cache_level_10_bytes 0
# HELP node_zfs_dbuf_dbuf_cache_level_11 kstat.zfs.misc.dbuf_stats.dbuf_cache_level_11
# TYPE node_zfs_dbuf_dbuf_cache_level_11 untyped
node_zfs_dbuf_dbuf_cache_level_11 0
# HELP node_zfs_dbuf_dbuf_cache_level_11_bytes kstat.zfs.misc.dbuf_stats.dbuf_cache_level_11_bytes
# TYPE node_zfs_dbuf_dbuf_cache_level_11_bytes untyped
node_zfs_dbuf_dbuf_cache_level_11_bytes 0
# HELP node_zfs_dbuf_dbuf_cache_level_1_bytes kstat.zfs.misc.dbuf_stats.dbuf_cache_level_1_bytes
# TYPE node_zfs_dbuf_dbuf_cache_level_1_bytes untyped
node_zfs_dbuf_dbuf_cache_level_1_bytes 0
# HELP node_zfs_dbuf_dbuf_cache_level_2 kstat.zfs.misc.dbuf_stats.dbuf_cache_level_2
# TYPE node_zfs_dbuf_dbuf_cache_level_2 untyped
node_zfs_dbuf_dbuf_cache_level_2 0
# HELP node_zfs_dbuf_dbuf_cache_level_2_bytes kstat.zfs.misc.dbuf_stats.dbuf_cache_level_2_bytes
# TYPE node_zfs_dbuf_dbuf_cache_level_2_bytes untyped
node_zfs_dbuf_dbuf_cache_level_2_bytes 0
# HELP node_zfs_dbuf_dbuf_cache_level_3 kstat.zfs.misc.dbuf_stats.dbuf_cache_level_3
# TYPE node_zfs_dbuf_dbuf_cache_level_3 untyped
node_zfs_dbuf_dbuf_cache_level_3 0
# HELP node_zfs_dbuf_dbuf_cache_level_3_bytes kstat.zfs.misc.dbuf_stats.dbuf_cache_level_3_bytes
# TYPE node_zfs_dbuf_dbuf_cache_level_3_bytes untyped
node_zfs_dbuf_dbuf_cache_level_3_bytes 0
# HELP node_zfs_dbuf_dbuf_cache_level_4 kstat.zfs.misc.dbuf_stats.dbuf_cache_level_4
# TYPE node_zfs_dbuf_dbuf_cache_level_4 untyped
node_zfs_dbuf_dbuf_cache_level_4 0
# HELP node_zfs_dbuf_dbuf_cache_level_4_bytes kstat.zfs.misc.dbuf_stats.dbuf_cache_level_4_bytes
# TYPE node_zfs_dbuf_dbuf_cache_level_4_bytes untyped
node_zfs_dbuf_dbuf_cache_level_4_bytes 0
# HELP node_zfs_dbuf_dbuf_cache_level_5 kstat.zfs.misc.dbuf_stats.dbuf_cache_level_5
# TYPE node_zfs_dbuf_dbuf_cache_level_5 untyped
node_zfs_dbuf_dbuf_cache_level_5 0
# HELP node_zfs_dbuf_dbuf_cache_level_5_bytes kstat.zfs.misc.dbuf_stats.dbuf_cache_level_5_bytes
# TYPE node_zfs_dbuf_dbuf_cache_level_5_bytes untyped
node_zfs_dbuf_dbuf_cache_level_5_bytes 0
# HELP node_zfs_dbuf_dbuf_cache_level_6 kstat.zfs.misc.dbuf_stats.dbuf_cache_level_6
# TYPE node_zfs_dbuf_dbuf_cache_level_6 untyped
node_zfs_dbuf_dbuf_cache_level_6 0
# HELP node_zfs_dbuf_dbuf_cache_level_6_bytes kstat.zfs.misc.dbuf_stats.dbuf_cache_level_6_bytes
# TYPE node_zfs_dbuf_dbuf_cache_level_6_bytes untyped
node_zfs_dbuf_dbuf_cache_level_6_bytes 0
# HELP node_zfs_dbuf_dbuf_cache_level_7 kstat.zfs.misc.dbuf_stats.dbuf_cache_level_7
# TYPE node_zfs_dbuf_dbuf_cache_level_7 untyped
node_zfs_dbuf_dbuf_cache_level_7 0
# HELP node_zfs_dbuf_dbuf_cache_level_7_bytes kstat.zfs.misc.dbuf_stats.dbuf_cache_level_7_bytes
# TYPE node_zfs_dbuf_dbuf_cache_level_7_bytes untyped
node_zfs_dbuf_dbuf_cache_level_7_bytes 0
# HELP node_zfs_dbuf_dbuf_cache_level_8 kstat.zfs.misc.dbuf_stats.dbuf_cache_level_8
# TYPE node_zfs_dbuf_dbuf_cache_level_8 untyped
node_zfs_dbuf_dbuf_cache_level_8 0
# HELP node_zfs_dbuf_dbuf_cache_level_8_bytes kstat.zfs.misc.dbuf_stats.dbuf_cache_level_8_bytes
# TYPE node_zfs_dbuf_dbuf_cache_level_8_bytes untyped
node_zfs_dbuf_dbuf_cache_level_8_bytes 0
# HELP node_zfs_dbuf_dbuf_cache_level_9 kstat.zfs.misc.dbuf_stats.dbuf_cache_level_9
# TYPE node_zfs_dbuf_dbuf_cache_level_9 untyped
node_zfs_dbuf_dbuf_cache_level_9 0
# HELP node_zfs_dbuf_dbuf_cache_level_9_bytes kstat.zfs.misc.dbuf_stats.dbuf_cache_level_9_bytes
# TYPE node_zfs_dbuf_dbuf_cache_level_9_bytes untyped
node_zfs_dbuf_dbuf_cache_level_9_bytes 0
# HELP node_zfs_dbuf_dbuf_cache_lowater_bytes kstat.zfs.misc.dbuf_stats.dbuf_cache_lowater_bytes
# TYPE node_zfs_dbuf_dbuf_cache_lowater_bytes untyped
node_zfs_dbuf_dbuf_cache_lowater_bytes 5.6550932e+07
# HELP node_zfs_dbuf_dbuf_cache_max_bytes kstat.zfs.misc.dbuf_stats.dbuf_cache_max_bytes
# TYPE node_zfs_dbuf_dbuf_cache_max_bytes untyped
node_zfs_dbuf_dbuf_cache_max_bytes 6.2834368e+07
# HELP node_zfs_dbuf_dbuf_cache_size kstat.zfs.misc.dbuf_stats.dbuf_cache_size
# TYPE node_zfs_dbuf_dbuf_cache_size untyped
node_zfs_dbuf_dbuf_cache_size 302080
# HELP node_zfs_dbuf_dbuf_cache_size_max kstat.zfs.misc.dbuf_stats.dbuf_cache_size_max
# TYPE node_zfs_dbuf_dbuf_cache_size_max untyped
node_zfs_dbuf_dbuf_cache_size_max 394240
# HELP node_zfs_dbuf_dbuf_cache_total_evicts kstat.zfs.misc.dbuf_stats.dbuf_cache_total_evicts
# TYPE node_zfs_dbuf_dbuf_cache_total_evicts untyped
node_zfs_dbuf_dbuf_cache_total_evicts 0
# HELP node_zfs_dbuf_hash_chain_max kstat.zfs.misc.dbuf_stats.hash_chain_max
# TYPE node_zfs_dbuf_hash_chain_max untyped
node_zfs_dbuf_hash_chain_max 0
# HELP node_zfs_dbuf_hash_chains kstat.zfs.misc.dbuf_stats.hash_chains
# TYPE node_zfs_dbuf_hash_chains untyped
node_zfs_dbuf_hash_chains 0
# HELP node_zfs_dbuf_hash_collisions kstat.zfs.misc.dbuf_stats.hash_collisions
# TYPE node_zfs_dbuf_hash_collisions untyped
node_zfs_dbuf_hash_collisions 0
# HELP node_zfs_dbuf_hash_dbuf_level_0 kstat.zfs.misc.dbuf_stats.hash_dbuf_level_0
# TYPE node_zfs_dbuf_hash_dbuf_level_0 untyped
node_zfs_dbuf_hash_dbuf_level_0 37
# HELP node_zfs_dbuf_hash_dbuf_level_0_bytes kstat.zfs.misc.dbuf_stats.hash_dbuf_level_0_bytes
# TYPE node_zfs_dbuf_hash_dbuf_level_0_bytes untyped
node_zfs_dbuf_hash_dbuf_level_0_bytes 465920
# HELP node_zfs_dbuf_hash_dbuf_level_1 kstat.zfs.misc.dbuf_stats.hash_dbuf_level_1
# TYPE node_zfs_dbuf_hash_dbuf_level_1 untyped
node_zfs_dbuf_hash_dbuf_level_1 10
# HELP node_zfs_dbuf_hash_dbuf_level_10 kstat.zfs.misc.dbuf_stats.hash_dbuf_level_10
# TYPE node_zfs_dbuf_hash_dbuf_level_10 untyped
node_zfs_dbuf_hash_dbuf_level_10 0
# HELP node_zfs_dbuf_hash_dbuf_level_10_bytes kstat.zfs.misc.dbuf_stats.hash_dbuf_level_10_bytes
# TYPE node_zfs_dbuf_hash_dbuf_level_10_bytes untyped
node_zfs_dbuf_hash_dbuf_level_10_bytes 0
# HELP node_zfs_dbuf_hash_dbuf_level_11 kstat.zfs.misc.dbuf_stats.hash_dbuf_level_11
# TYPE node_zfs_dbuf_hash_dbuf_level_11 untyped
node_zfs_dbuf_hash_dbuf_level_11 0
# HELP node_zfs_dbuf_hash_dbuf_level_11_bytes kstat.zfs.misc.dbuf_stats.hash_dbuf_level_11_bytes
# TYPE node_zfs_dbuf_hash_dbuf_level_11_bytes untyped
node_zfs_dbuf_hash_dbuf_level_11_bytes 0
# HELP node_zfs_dbuf_hash_dbuf_level_1_bytes kstat.zfs.misc.dbuf_stats.hash_dbuf_level_1_bytes
# TYPE node_zfs_dbuf_hash_dbuf_level_1_bytes untyped
node_zfs_dbuf_hash_dbuf_level_1_bytes 1.31072e+06
# HELP node_zfs_dbuf_hash_dbuf_level_2 kstat.zfs.misc.dbuf_stats.hash_dbuf_level_2
# TYPE node_zfs_dbuf_hash_dbuf_level_2 untyped
node_zfs_dbuf_hash_dbuf_level_2 2
# HELP node_zfs_dbuf_hash_dbuf_level_2_bytes kstat.zfs.misc.dbuf_stats.hash_dbuf_level_2_bytes
# TYPE node_zfs_dbuf_hash_dbuf_level_2_bytes untyped
node_zfs_dbuf_hash_dbuf_level_2_bytes 262144
# HELP node_zfs_dbuf_hash_dbuf_level_3 kstat.zfs.misc.dbuf_stats.hash_dbuf_level_3
# TYPE node_zfs_dbuf_hash_dbuf_level_3 untyped
node_zfs_dbuf_hash_dbuf_level_3 2
# HELP node_zfs_dbuf_hash_dbuf_level_3_bytes kstat.zfs.misc.dbuf_stats.hash_dbuf_level_3_bytes
# TYPE node_zfs_dbuf_hash_dbuf_level_3_bytes untyped
node_zfs_dbuf_hash_dbuf_level_3_bytes 262144
# HELP node_zfs_dbuf_hash_dbuf_level_4 kstat.zfs.misc.dbuf_stats.hash_dbuf_level_4
# TYPE node_zfs_dbuf_hash_dbuf_level_4 untyped
node_zfs_dbuf_hash_dbuf_level_4 2
# HELP node_zfs_dbuf_hash_dbuf_level_4_bytes kstat.zfs.misc.dbuf_stats.hash_dbuf_level_4_bytes
# TYPE node_zfs_dbuf_hash_dbuf_level_4_bytes untyped
node_zfs_dbuf_hash_dbuf_level_4_bytes 262144
# HELP node_zfs_dbuf_hash_dbuf_level_5 kstat.zfs.misc.dbuf_stats.hash_dbuf_level_5
# TYPE node_zfs_dbuf_hash_dbuf_level_5 untyped
node_zfs_dbuf_hash_dbuf_level_5 2
# HELP node_zfs_dbuf_hash_dbuf_level_5_bytes kstat.zfs.misc.dbuf_stats.hash_dbuf_level_5_bytes
# TYPE node_zfs_dbuf_hash_dbuf_level_5_bytes untyped
node_zfs_dbuf_hash_dbuf_level_5_bytes 262144
# HELP node_zfs_dbuf_hash_dbuf_level_6 kstat.zfs.misc.dbuf_stats.hash_dbuf_level_6
# TYPE node_zfs_dbuf_hash_dbuf_level_6 untyped
node_zfs_dbuf_hash_dbuf_level_6 0
# HELP node_zfs_dbuf_hash_dbuf_level_6_bytes kstat.zfs.misc.dbuf_stats.hash_dbuf_level_6_bytes
# TYPE node_zfs_dbuf_hash_dbuf_level_6_bytes untyped
node_zfs_dbuf_hash_dbuf_level_6_bytes 0
# HELP node_zfs_dbuf_hash_dbuf_level_7 kstat.zfs.misc.dbuf_stats.hash_dbuf_level_7
# TYPE node_zfs_dbuf_hash_dbuf_level_7 untyped
node_zfs_dbuf_hash_dbuf_level_7 0
# HELP node_zfs_dbuf_hash_dbuf_level_7_bytes kstat.zfs.misc.dbuf_stats.hash_dbuf_level_7_bytes
# TYPE node_zfs_dbuf_hash_dbuf_level_7_bytes untyped
node_zfs_dbuf_hash_dbuf_level_7_bytes 0
# HELP node_zfs_dbuf_hash_dbuf_level_8 kstat.zfs.misc.dbuf_stats.hash_dbuf_level_8
# TYPE node_zfs_dbuf_hash_dbuf_level_8 untyped
node_zfs_dbuf_hash_dbuf_level_8 0
# HELP node_zfs_dbuf_hash_dbuf_level_8_bytes kstat.zfs.misc.dbuf_stats.hash_dbuf_level_8_bytes
# TYPE node_zfs_dbuf_hash_dbuf_level_8_bytes untyped
node_zfs_dbuf_hash_dbuf_level_8_bytes 0
# HELP node_zfs_dbuf_hash_dbuf_level_9 kstat.zfs.misc.dbuf_stats.hash_dbuf_level_9
# TYPE node_zfs_dbuf_hash_dbuf_level_9 untyped
node_zfs_dbuf_hash_dbuf_level_9 0
# HELP node_zfs_dbuf_hash_dbuf_level_9_bytes kstat.zfs.misc.dbuf_stats.hash_dbuf_level_9_bytes
# TYPE node_zfs_dbuf_hash_dbuf_level_9_bytes untyped
node_zfs_dbuf_hash_dbuf_level_9_bytes 0
# HELP node_zfs_dbuf_hash_elements kstat.zfs.misc.dbuf_stats.hash_elements
# TYPE node_zfs_dbuf_hash_elements untyped
node_zfs_dbuf_hash_elements 55
# HELP node_zfs_dbuf_hash_elements_max kstat.zfs.misc.dbuf_stats.hash_elements_max
# TYPE node_zfs_dbuf_hash_elements_max untyped
node_zfs_dbuf_hash_elements_max 55
# HELP node_zfs_dbuf_hash_hits kstat.zfs.misc.dbuf_stats.hash_hits
# TYPE node_zfs_dbuf_hash_hits untyped
node_zfs_dbuf_hash_hits 108807
# HELP node_zfs_dbuf_hash_insert_race kstat.zfs.misc.dbuf_stats.hash_insert_race
# TYPE node_zfs_dbuf_hash_insert_race untyped
node_zfs_dbuf_hash_insert_race 0
# HELP node_zfs_dbuf_hash_misses kstat.zfs.misc.dbuf_stats.hash_misses
# TYPE node_zfs_dbuf_hash_misses untyped
node_zfs_dbuf_hash_misses 1851
# HELP node_zfs_dmu_tx_dmu_tx_assigned kstat.zfs.misc.dmu_tx.dmu_tx_assigned
# TYPE node_zfs_dmu_tx_dmu_tx_assigned untyped
node_zfs_dmu_tx_dmu_tx_assigned 3.532844e+06
# HELP node_zfs_dmu_tx_dmu_tx_delay kstat.zfs.misc.dmu_tx.dmu_tx_delay
# TYPE node_zfs_dmu_tx_dmu_tx_delay untyped
node_zfs_dmu_tx_dmu_tx_delay 0
# HELP node_zfs_dmu_tx_dmu_tx_dirty_delay kstat.zfs.misc.dmu_tx.dmu_tx_dirty_delay
# TYPE node_zfs_dmu_tx_dmu_tx_dirty_delay untyped
node_zfs_dmu_tx_dmu_tx_dirty_delay 0
# HELP node_zfs_dmu_tx_dmu_tx_dirty_over_max kstat.zfs.misc.dmu_tx.dmu_tx_dirty_over_max
# TYPE node_zfs_dmu_tx_dmu_tx_dirty_over_max untyped
node_zfs_dmu_tx_dmu_tx_dirty_over_max 0
# HELP node_zfs_dmu_tx_dmu_tx_dirty_throttle kstat.zfs.misc.dmu_tx.dmu_tx_dirty_throttle
# TYPE node_zfs_dmu_tx_dmu_tx_dirty_throttle untyped
node_zfs_dmu_tx_dmu_tx_dirty_throttle 0
# HELP node_zfs_dmu_tx_dmu_tx_error kstat.zfs.misc.dmu_tx.dmu_tx_error
# TYPE node_zfs_dmu_tx_dmu_tx_error untyped
node_zfs_dmu_tx_dmu_tx_error 0
# HELP node_zfs_dmu_tx_dmu_tx_group kstat.zfs.misc.dmu_tx.dmu_tx_group
# TYPE node_zfs_dmu_tx_dmu_tx_group untyped
node_zfs_dmu_tx_dmu_tx_group 0
# HELP node_zfs_dmu_tx_dmu_tx_memory_reclaim kstat.zfs.misc.dmu_tx.dmu_tx_memory_reclaim
# TYPE node_zfs_dmu_tx_dmu_tx_memory_reclaim untyped
node_zfs_dmu_tx_dmu_tx_memory_reclaim 0
# HELP node_zfs_dmu_tx_dmu_tx_memory_reserve kstat.zfs.misc.dmu_tx.dmu_tx_memory_reserve
# TYPE node_zfs_dmu_tx_dmu_tx_memory_reserve untyped
node_zfs_dmu_tx_dmu_tx_memory_reserve 0
# HELP node_zfs_dmu_tx_dmu_tx_quota kstat.zfs.misc.dmu_tx.dmu_tx_quota
# TYPE node_zfs_dmu_tx_dmu_tx_quota untyped
node_zfs_dmu_tx_dmu_tx_quota 0
# HELP node_zfs_dmu_tx_dmu_tx_suspended kstat.zfs.misc.dmu_tx.dmu_tx_suspended
# TYPE node_zfs_dmu_tx_dmu_tx_suspended untyped
node_zfs_dmu_tx_dmu_tx_suspended 0
# HELP node_zfs_dnode_dnode_alloc_next_block kstat.zfs.misc.dnodestats.dnode_alloc_next_block
# TYPE node_zfs_dnode_dnode_alloc_next_block untyped
node_zfs_dnode_dnode_alloc_next_block 0
# HELP node_zfs_dnode_dnode_alloc_next_chunk kstat.zfs.misc.dnodestats.dnode_alloc_next_chunk
# TYPE node_zfs_dnode_dnode_alloc_next_chunk untyped
node_zfs_dnode_dnode_alloc_next_chunk 0
# HELP node_zfs_dnode_dnode_alloc_race kstat.zfs.misc.dnodestats.dnode_alloc_race
# TYPE node_zfs_dnode_dnode_alloc_race untyped
node_zfs_dnode_dnode_alloc_race 0
# HELP node_zfs_dnode_dnode_allocate kstat.zfs.misc.dnodestats.dnode_allocate
# TYPE node_zfs_dnode_dnode_allocate untyped
node_zfs_dnode_dnode_allocate 0
# HELP node_zfs_dnode_dnode_buf_evict kstat.zfs.misc.dnodestats.dnode_buf_evict
# TYPE node_zfs_dnode_dnode_buf_evict untyped
node_zfs_dnode_dnode_buf_evict 17
# HELP node_zfs_dnode_dnode_hold_alloc_hits kstat.zfs.misc.dnodestats.dnode_hold_alloc_hits
# TYPE node_zfs_dnode_dnode_hold_alloc_hits untyped
node_zfs_dnode_dnode_hold_alloc_hits 37617
# HELP node_zfs_dnode_dnode_hold_alloc_interior kstat.zfs.misc.dnodestats.dnode_hold_alloc_interior
# TYPE node_zfs_dnode_dnode_hold_alloc_interior untyped
node_zfs_dnode_dnode_hold_alloc_interior 0
# HELP node_zfs_dnode_dnode_hold_alloc_lock_misses kstat.zfs.misc.dnodestats.dnode_hold_alloc_lock_misses
# TYPE node_zfs_dnode_dnode_hold_alloc_lock_misses untyped
node_zfs_dnode_dnode_hold_alloc_lock_misses 0
# HELP node_zfs_dnode_dnode_hold_alloc_lock_retry kstat.zfs.misc.dnodestats.dnode_hold_alloc_lock_retry
# TYPE node_zfs_dnode_dnode_hold_alloc_lock_retry untyped
node_zfs_dnode_dnode_hold_alloc_lock_retry 0
# HELP node_zfs_dnode_dnode_hold_alloc_misses kstat.zfs.misc.dnodestats.dnode_hold_alloc_misses
# TYPE node_zfs_dnode_dnode_hold_alloc_misses untyped
node_zfs_dnode_dnode_hold_alloc_misses 0
# HELP node_zfs_dnode_dnode_hold_alloc_type_none kstat.zfs.misc.dnodestats.dnode_hold_alloc_type_none
# TYPE node_zfs_dnode_dnode_hold_alloc_type_none untyped
node_zfs_dnode_dnode_hold_alloc_type_none 0
# HELP node_zfs_dnode_dnode_hold_dbuf_hold kstat.zfs.misc.dnodestats.dnode_hold_dbuf_hold
# TYPE node_zfs_dnode_dnode_hold_dbuf_hold untyped
node_zfs_dnode_dnode_hold_dbuf_hold 0
# HELP node_zfs_dnode_dnode_hold_dbuf_read kstat.zfs.misc.dnodestats.dnode_hold_dbuf_read
# TYPE node_zfs_dnode_dnode_hold_dbuf_read untyped
node_zfs_dnode_dnode_hold_dbuf_read 0
# HELP node_zfs_dnode_dnode_hold_free_hits kstat.zfs.misc.dnodestats.dnode_hold_free_hits
# TYPE node_zfs_dnode_dnode_hold_free_hits untyped
node_zfs_dnode_dnode_hold_free_hits 0
# HELP node_zfs_dnode_dnode_hold_free_lock_misses kstat.zfs.misc.dnodestats.dnode_hold_free_lock_misses
# TYPE node_zfs_dnode_dnode_hold_free_lock_misses untyped
node_zfs_dnode_dnode_hold_free_lock_misses 0
# HELP node_zfs_dnode_dnode_hold_free_lock_retry kstat.zfs.misc.dnodestats.dnode_hold_free_lock_retry
# TYPE node_zfs_dnode_dnode_hold_free_lock_retry untyped
node_zfs_dnode_dnode_hold_free_lock_retry 0
# HELP node_zfs_dnode_dnode_hold_free_misses kstat.zfs.misc.dnodestats.dnode_hold_free_misses
# TYPE node_zfs_dnode_dnode_hold_free_misses untyped
node_zfs_dnode_dnode_hold_free_misses 0
# HELP node_zfs_dnode_dnode_hold_free_overflow kstat.zfs.misc.dnodestats.dnode_hold_free_overflow
# TYPE node_zfs_dnode_dnode_hold_free_overflow untyped
node_zfs_dnode_dnode_hold_free_overflow 0
# HELP node_zfs_dnode_dnode_hold_free_refcount kstat.zfs.misc.dnodestats.dnode_hold_free_refcount
# TYPE node_zfs_dnode_dnode_hold_free_refcount untyped
node_zfs_dnode_dnode_hold_free_refcount 0
# HELP node_zfs_dnode_dnode_hold_free_txg kstat.zfs.misc.dnodestats.dnode_hold_free_txg
# TYPE node_zfs_dnode_dnode_hold_free_txg untyped
node_zfs_dnode_dnode_hold_free_txg 0
# HELP node_zfs_dnode_dnode_move_active kstat.zfs.misc.dnodestats.dnode_move_active
# TYPE node_zfs_dnode_dnode_move_active untyped
node_zfs_dnode_dnode_move_active 0
# HELP node_zfs_dnode_dnode_move_handle kstat.zfs.misc.dnodestats.dnode_move_handle
# TYPE node_zfs_dnode_dnode_move_handle untyped
node_zfs_dnode_dnode_move_handle 0
# HELP node_zfs_dnode_dnode_move_invalid kstat.zfs.misc.dnodestats.dnode_move_invalid
# TYPE node_zfs_dnode_dnode_move_invalid untyped
node_zfs_dnode_dnode_move_invalid 0
# HELP node_zfs_dnode_dnode_move_recheck1 kstat.zfs.misc.dnodestats.dnode_move_recheck1
# TYPE node_zfs_dnode_dnode_move_recheck1 untyped
node_zfs_dnode_dnode_move_recheck1 0
# HELP node_zfs_dnode_dnode_move_recheck2 kstat.zfs.misc.dnodestats.dnode_move_recheck2
# TYPE node_zfs_dnode_dnode_move_recheck2 untyped
node_zfs_dnode_dnode_move_recheck2 0
# HELP node_zfs_dnode_dnode_move_rwlock kstat.zfs.misc.dnodestats.dnode_move_rwlock
# TYPE node_zfs_dnode_dnode_move_rwlock untyped
node_zfs_dnode_dnode_move_rwlock 0
# HELP node_zfs_dnode_dnode_move_special kstat.zfs.misc.dnodestats.dnode_move_special
# TYPE node_zfs_dnode_dnode_move_special untyped
node_zfs_dnode_dnode_move_special 0
# HELP node_zfs_dnode_dnode_reallocate kstat.zfs.misc.dnodestats.dnode_reallocate
# TYPE node_zfs_dnode_dnode_reallocate untyped
node_zfs_dnode_dnode_reallocate 0
# HELP node_zfs_fm_erpt_dropped kstat.zfs.misc.fm.erpt-dropped
# TYPE node_zfs_fm_erpt_dropped untyped
node_zfs_fm_erpt_dropped 18
# HELP node_zfs_fm_erpt_set_failed kstat.zfs.misc.fm.erpt-set-failed
# TYPE node_zfs_fm_erpt_set_failed untyped
node_zfs_fm_erpt_set_failed 0
# HELP node_zfs_fm_fmri_set_failed kstat.zfs.misc.fm.fmri-set-failed
# TYPE node_zfs_fm_fmri_set_failed untyped
node_zfs_fm_fmri_set_failed 0
# HELP node_zfs_fm_payload_set_failed kstat.zfs.misc.fm.payload-set-failed
# TYPE node_zfs_fm_payload_set_failed untyped
node_zfs_fm_payload_set_failed 0
# HELP node_zfs_vdev_cache_delegations kstat.zfs.misc.vdev_cache_stats.delegations
# TYPE node_zfs_vdev_cache_delegations untyped
node_zfs_vdev_cache_delegations 40
# HELP node_zfs_vdev_cache_hits kstat.zfs.misc.vdev_cache_stats.hits
# TYPE node_zfs_vdev_cache_hits untyped
node_zfs_vdev_cache_hits 0
# HELP node_zfs_vdev_cache_misses kstat.zfs.misc.vdev_cache_stats.misses
# TYPE node_zfs_vdev_cache_misses untyped
node_zfs_vdev_cache_misses 0
# HELP node_zfs_vdev_mirror_non_rotating_linear kstat.zfs.misc.vdev_mirror_stats.non_rotating_linear
# TYPE node_zfs_vdev_mirror_non_rotating_linear untyped
node_zfs_vdev_mirror_non_rotating_linear 0
# HELP node_zfs_vdev_mirror_non_rotating_seek kstat.zfs.misc.vdev_mirror_stats.non_rotating_seek
# TYPE node_zfs_vdev_mirror_non_rotating_seek untyped
node_zfs_vdev_mirror_non_rotating_seek 0
# HELP node_zfs_vdev_mirror_preferred_found kstat.zfs.misc.vdev_mirror_stats.preferred_found
# TYPE node_zfs_vdev_mirror_preferred_found untyped
node_zfs_vdev_mirror_preferred_found 0
# HELP node_zfs_vdev_mirror_preferred_not_found kstat.zfs.misc.vdev_mirror_stats.preferred_not_found
# TYPE node_zfs_vdev_mirror_preferred_not_found untyped
node_zfs_vdev_mirror_preferred_not_found 94
# HELP node_zfs_vdev_mirror_rotating_linear kstat.zfs.misc.vdev_mirror_stats.rotating_linear
# TYPE node_zfs_vdev_mirror_rotating_linear untyped
node_zfs_vdev_mirror_rotating_linear 0
# HELP node_zfs_vdev_mirror_rotating_offset kstat.zfs.misc.vdev_mirror_stats.rotating_offset
# TYPE node_zfs_vdev_mirror_rotating_offset untyped
node_zfs_vdev_mirror_rotating_offset 0
# HELP node_zfs_vdev_mirror_rotating_seek kstat.zfs.misc.vdev_mirror_stats.rotating_seek
# TYPE node_zfs_vdev_mirror_rotating_seek untyped
node_zfs_vdev_mirror_rotating_seek 0
# HELP node_zfs_xuio_onloan_read_buf kstat.zfs.misc.xuio_stats.onloan_read_buf
# TYPE node_zfs_xuio_onloan_read_buf untyped
node_zfs_xuio_onloan_read_buf 32
# HELP node_zfs_xuio_onloan_write_buf kstat.zfs.misc.xuio_stats.onloan_write_buf
# TYPE node_zfs_xuio_onloan_write_buf untyped
node_zfs_xuio_onloan_write_buf 0
# HELP node_zfs_xuio_read_buf_copied kstat.zfs.misc.xuio_stats.read_buf_copied
# TYPE node_zfs_xuio_read_buf_copied untyped
node_zfs_xuio_read_buf_copied 0
# HELP node_zfs_xuio_read_buf_nocopy kstat.zfs.misc.xuio_stats.read_buf_nocopy
# TYPE node_zfs_xuio_read_buf_nocopy untyped
node_zfs_xuio_read_buf_nocopy 0
# HELP node_zfs_xuio_write_buf_copied kstat.zfs.misc.xuio_stats.write_buf_copied
# TYPE node_zfs_xuio_write_buf_copied untyped
node_zfs_xuio_write_buf_copied 0
# HELP node_zfs_xuio_write_buf_nocopy kstat.zfs.misc.xuio_stats.write_buf_nocopy
# TYPE node_zfs_xuio_write_buf_nocopy untyped
node_zfs_xuio_write_buf_nocopy 0
# HELP node_zfs_zfetch_bogus_streams kstat.zfs.misc.zfetchstats.bogus_streams
# TYPE node_zfs_zfetch_bogus_streams untyped
node_zfs_zfetch_bogus_streams 0
# HELP node_zfs_zfetch_colinear_hits kstat.zfs.misc.zfetchstats.colinear_hits
# TYPE node_zfs_zfetch_colinear_hits untyped
node_zfs_zfetch_colinear_hits 0
# HELP node_zfs_zfetch_colinear_misses kstat.zfs.misc.zfetchstats.colinear_misses
# TYPE node_zfs_zfetch_colinear_misses untyped
node_zfs_zfetch_colinear_misses 11
# HELP node_zfs_zfetch_hits kstat.zfs.misc.zfetchstats.hits
# TYPE node_zfs_zfetch_hits untyped
node_zfs_zfetch_hits 7.067992e+06
# HELP node_zfs_zfetch_misses kstat.zfs.misc.zfetchstats.misses
# TYPE node_zfs_zfetch_misses untyped
node_zfs_zfetch_misses 11
# HELP node_zfs_zfetch_reclaim_failures kstat.zfs.misc.zfetchstats.reclaim_failures
# TYPE node_zfs_zfetch_reclaim_failures untyped
node_zfs_zfetch_reclaim_failures 11
# HELP node_zfs_zfetch_reclaim_successes kstat.zfs.misc.zfetchstats.reclaim_successes
# TYPE node_zfs_zfetch_reclaim_successes untyped
node_zfs_zfetch_reclaim_successes 0
# HELP node_zfs_zfetch_streams_noresets kstat.zfs.misc.zfetchstats.streams_noresets
# TYPE node_zfs_zfetch_streams_noresets untyped
node_zfs_zfetch_streams_noresets 2
# HELP node_zfs_zfetch_streams_resets kstat.zfs.misc.zfetchstats.streams_resets
# TYPE node_zfs_zfetch_streams_resets untyped
node_zfs_zfetch_streams_resets 0
# HELP node_zfs_zfetch_stride_hits kstat.zfs.misc.zfetchstats.stride_hits
# TYPE node_zfs_zfetch_stride_hits untyped
node_zfs_zfetch_stride_hits 7.06799e+06
# HELP node_zfs_zfetch_stride_misses kstat.zfs.misc.zfetchstats.stride_misses
# TYPE node_zfs_zfetch_stride_misses untyped
node_zfs_zfetch_stride_misses 0
# HELP node_zfs_zil_zil_commit_count kstat.zfs.misc.zil.zil_commit_count
# TYPE node_zfs_zil_zil_commit_count untyped
node_zfs_zil_zil_commit_count 10
# HELP node_zfs_zil_zil_commit_writer_count kstat.zfs.misc.zil.zil_commit_writer_count
# TYPE node_zfs_zil_zil_commit_writer_count untyped
node_zfs_zil_zil_commit_writer_count 0
# HELP node_zfs_zil_zil_itx_copied_bytes kstat.zfs.misc.zil.zil_itx_copied_bytes
# TYPE node_zfs_zil_zil_itx_copied_bytes untyped
node_zfs_zil_zil_itx_copied_bytes 0
# HELP node_zfs_zil_zil_itx_copied_count kstat.zfs.misc.zil.zil_itx_copied_count
# TYPE node_zfs_zil_zil_itx_copied_count untyped
node_zfs_zil_zil_itx_copied_count 0
# HELP node_zfs_zil_zil_itx_count kstat.zfs.misc.zil.zil_itx_count
# TYPE node_zfs_zil_zil_itx_count untyped
node_zfs_zil_zil_itx_count 0
# HELP node_zfs_zil_zil_itx_indirect_bytes kstat.zfs.misc.zil.zil_itx_indirect_bytes
# TYPE node_zfs_zil_zil_itx_indirect_bytes untyped
node_zfs_zil_zil_itx_indirect_bytes 0
# HELP node_zfs_zil_zil_itx_indirect_count kstat.zfs.misc.zil.zil_itx_indirect_count
# TYPE node_zfs_zil_zil_itx_indirect_count untyped
node_zfs_zil_zil_itx_indirect_count 0
# HELP node_zfs_zil_zil_itx_metaslab_normal_bytes kstat.zfs.misc.zil.zil_itx_metaslab_normal_bytes
# TYPE node_zfs_zil_zil_itx_metaslab_normal_bytes untyped
node_zfs_zil_zil_itx_metaslab_normal_bytes 0
# HELP node_zfs_zil_zil_itx_metaslab_normal_count kstat.zfs.misc.zil.zil_itx_metaslab_normal_count
# TYPE node_zfs_zil_zil_itx_metaslab_normal_count untyped
node_zfs_zil_zil_itx_metaslab_normal_count 0
# HELP node_zfs_zil_zil_itx_metaslab_slog_bytes kstat.zfs.misc.zil.zil_itx_metaslab_slog_bytes
# TYPE node_zfs_zil_zil_itx_metaslab_slog_bytes untyped
node_zfs_zil_zil_itx_metaslab_slog_bytes 0
# HELP node_zfs_zil_zil_itx_metaslab_slog_count kstat.zfs.misc.zil.zil_itx_metaslab_slog_count
# TYPE node_zfs_zil_zil_itx_metaslab_slog_count untyped
node_zfs_zil_zil_itx_metaslab_slog_count 0
# HELP node_zfs_zil_zil_itx_needcopy_bytes kstat.zfs.misc.zil.zil_itx_needcopy_bytes
# TYPE node_zfs_zil_zil_itx_needcopy_bytes untyped
node_zfs_zil_zil_itx_needcopy_bytes 1.8446744073709537e+19
# HELP node_zfs_zil_zil_itx_needcopy_count kstat.zfs.misc.zil.zil_itx_needcopy_count
# TYPE node_zfs_zil_zil_itx_needcopy_count untyped
node_zfs_zil_zil_itx_needcopy_count 0
# HELP node_zfs_zpool_dataset_nread kstat.zfs.misc.objset.nread
# TYPE node_zfs_zpool_dataset_nread untyped
node_zfs_zpool_dataset_nread{dataset="pool1",zpool="pool1"} 0
node_zfs_zpool_dataset_nread{dataset="pool1/dataset1",zpool="pool1"} 28
node_zfs_zpool_dataset_nread{dataset="poolz1",zpool="poolz1"} 0
node_zfs_zpool_dataset_nread{dataset="poolz1/dataset1",zpool="poolz1"} 28
# HELP node_zfs_zpool_dataset_nunlinked kstat.zfs.misc.objset.nunlinked
# TYPE node_zfs_zpool_dataset_nunlinked untyped
node_zfs_zpool_dataset_nunlinked{dataset="pool1",zpool="pool1"} 0
node_zfs_zpool_dataset_nunlinked{dataset="pool1/dataset1",zpool="pool1"} 3
node_zfs_zpool_dataset_nunlinked{dataset="poolz1",zpool="poolz1"} 0
node_zfs_zpool_dataset_nunlinked{dataset="poolz1/dataset1",zpool="poolz1"} 14
# HELP node_zfs_zpool_dataset_nunlinks kstat.zfs.misc.objset.nunlinks
# TYPE node_zfs_zpool_dataset_nunlinks untyped
node_zfs_zpool_dataset_nunlinks{dataset="pool1",zpool="pool1"} 0
node_zfs_zpool_dataset_nunlinks{dataset="pool1/dataset1",zpool="pool1"} 3
node_zfs_zpool_dataset_nunlinks{dataset="poolz1",zpool="poolz1"} 0
node_zfs_zpool_dataset_nunlinks{dataset="poolz1/dataset1",zpool="poolz1"} 14
# HELP node_zfs_zpool_dataset_nwritten kstat.zfs.misc.objset.nwritten
# TYPE node_zfs_zpool_dataset_nwritten untyped
node_zfs_zpool_dataset_nwritten{dataset="pool1",zpool="pool1"} 0
node_zfs_zpool_dataset_nwritten{dataset="pool1/dataset1",zpool="pool1"} 12302
node_zfs_zpool_dataset_nwritten{dataset="poolz1",zpool="poolz1"} 0
node_zfs_zpool_dataset_nwritten{dataset="poolz1/dataset1",zpool="poolz1"} 32806
# HELP node_zfs_zpool_dataset_reads kstat.zfs.misc.objset.reads
# TYPE node_zfs_zpool_dataset_reads untyped
node_zfs_zpool_dataset_reads{dataset="pool1",zpool="pool1"} 0
node_zfs_zpool_dataset_reads{dataset="pool1/dataset1",zpool="pool1"} 2
node_zfs_zpool_dataset_reads{dataset="poolz1",zpool="poolz1"} 0
node_zfs_zpool_dataset_reads{dataset="poolz1/dataset1",zpool="poolz1"} 2
# HELP node_zfs_zpool_dataset_writes kstat.zfs.misc.objset.writes
# TYPE node_zfs_zpool_dataset_writes untyped
node_zfs_zpool_dataset_writes{dataset="pool1",zpool="pool1"} 0
node_zfs_zpool_dataset_writes{dataset="pool1/dataset1",zpool="pool1"} 4
node_zfs_zpool_dataset_writes{dataset="poolz1",zpool="poolz1"} 0
node_zfs_zpool_dataset_writes{dataset="poolz1/dataset1",zpool="poolz1"} 10
# HELP node_zfs_zpool_nread kstat.zfs.misc.io.nread
# TYPE node_zfs_zpool_nread untyped
node_zfs_zpool_nread{zpool="pool1"} 1.88416e+06
node_zfs_zpool_nread{zpool="poolz1"} 2.82624e+06
# HELP node_zfs_zpool_nwritten kstat.zfs.misc.io.nwritten
# TYPE node_zfs_zpool_nwritten untyped
node_zfs_zpool_nwritten{zpool="pool1"} 3.206144e+06
node_zfs_zpool_nwritten{zpool="poolz1"} 2.680501248e+09
# HELP node_zfs_zpool_rcnt kstat.zfs.misc.io.rcnt
# TYPE node_zfs_zpool_rcnt untyped
node_zfs_zpool_rcnt{zpool="pool1"} 0
node_zfs_zpool_rcnt{zpool="poolz1"} 0
# HELP node_zfs_zpool_reads kstat.zfs.misc.io.reads
# TYPE node_zfs_zpool_reads untyped
node_zfs_zpool_reads{zpool="pool1"} 22
node_zfs_zpool_reads{zpool="poolz1"} 33
# HELP node_zfs_zpool_rlentime kstat.zfs.misc.io.rlentime
# TYPE node_zfs_zpool_rlentime untyped
node_zfs_zpool_rlentime{zpool="pool1"} 1.04112268e+08
node_zfs_zpool_rlentime{zpool="poolz1"} 6.472105124093e+12
# HELP node_zfs_zpool_rtime kstat.zfs.misc.io.rtime
# TYPE node_zfs_zpool_rtime untyped
node_zfs_zpool_rtime{zpool="pool1"} 2.4168078e+07
node_zfs_zpool_rtime{zpool="poolz1"} 9.82909164e+09
# HELP node_zfs_zpool_rupdate kstat.zfs.misc.io.rupdate
# TYPE node_zfs_zpool_rupdate untyped
node_zfs_zpool_rupdate{zpool="pool1"} 7.921048984922e+13
node_zfs_zpool_rupdate{zpool="poolz1"} 1.10734831944501e+14
# HELP node_zfs_zpool_wcnt kstat.zfs.misc.io.wcnt
# TYPE node_zfs_zpool_wcnt untyped
node_zfs_zpool_wcnt{zpool="pool1"} 0
node_zfs_zpool_wcnt{zpool="poolz1"} 0
# HELP node_zfs_zpool_wlentime kstat.zfs.misc.io.wlentime
# TYPE node_zfs_zpool_wlentime untyped
node_zfs_zpool_wlentime{zpool="pool1"} 1.04112268e+08
node_zfs_zpool_wlentime{zpool="poolz1"} 6.472105124093e+12
# HELP node_zfs_zpool_writes kstat.zfs.misc.io.writes
# TYPE node_zfs_zpool_writes untyped
node_zfs_zpool_writes{zpool="pool1"} 132
node_zfs_zpool_writes{zpool="poolz1"} 25294
# HELP node_zfs_zpool_wtime kstat.zfs.misc.io.wtime
# TYPE node_zfs_zpool_wtime untyped
node_zfs_zpool_wtime{zpool="pool1"} 7.155162e+06
node_zfs_zpool_wtime{zpool="poolz1"} 9.673715628e+09
# HELP node_zfs_zpool_wupdate kstat.zfs.misc.io.wupdate
# TYPE node_zfs_zpool_wupdate untyped
node_zfs_zpool_wupdate{zpool="pool1"} 7.9210489694949e+13
node_zfs_zpool_wupdate{zpool="poolz1"} 1.10734831833266e+14
# HELP process_cpu_seconds_total Total user and system CPU time spent in seconds.
# TYPE process_cpu_seconds_total counter
# HELP process_max_fds Maximum number of open file descriptors.
# TYPE process_max_fds gauge
# HELP process_open_fds Number of open file descriptors.
# TYPE process_open_fds gauge
# HELP process_resident_memory_bytes Resident memory size in bytes.
# TYPE process_resident_memory_bytes gauge
# HELP process_start_time_seconds Start time of the process since unix epoch in seconds.
# TYPE process_start_time_seconds gauge
# HELP process_virtual_memory_bytes Virtual memory size in bytes.
# TYPE process_virtual_memory_bytes gauge
# HELP process_virtual_memory_max_bytes Maximum amount of virtual memory available in bytes.
# TYPE process_virtual_memory_max_bytes gauge
# HELP promhttp_metric_handler_errors_total Total number of internal errors encountered by the promhttp metric handler.
# TYPE promhttp_metric_handler_errors_total counter
promhttp_metric_handler_errors_total{cause="encoding"} 0
promhttp_metric_handler_errors_total{cause="gathering"} 0
# HELP promhttp_metric_handler_requests_in_flight Current number of scrapes being served.
# TYPE promhttp_metric_handler_requests_in_flight gauge
promhttp_metric_handler_requests_in_flight 1
# HELP promhttp_metric_handler_requests_total Total number of scrapes by HTTP status code.
# TYPE promhttp_metric_handler_requests_total counter
promhttp_metric_handler_requests_total{code="200"} 0
promhttp_metric_handler_requests_total{code="500"} 0
promhttp_metric_handler_requests_total{code="503"} 0
# HELP testmetric1_1 Metric read from collector/fixtures/textfile/two_metric_files/metrics1.prom
# TYPE testmetric1_1 untyped
testmetric1_1{foo="bar"} 10
# HELP testmetric1_2 Metric read from collector/fixtures/textfile/two_metric_files/metrics1.prom
# TYPE testmetric1_2 untyped
testmetric1_2{foo="baz"} 20
# HELP testmetric2_1 Metric read from collector/fixtures/textfile/two_metric_files/metrics2.prom
# TYPE testmetric2_1 untyped
testmetric2_1{foo="bar"} 30
# HELP testmetric2_2 Metric read from collector/fixtures/textfile/two_metric_files/metrics2.prom
# TYPE testmetric2_2 untyped
testmetric2_2{foo="baz"} 40