docs: remove <p> for images (#5702)

Signed-off-by: Artem Navoiev <tenmozes@gmail.com>
Artem Navoiev 2024-01-26 13:06:48 -08:00 committed by Aliaksandr Valialkin
parent 36fa314161
commit d42908133c
11 changed files with 31 additions and 81 deletions


@ -45,9 +45,7 @@ Each service may scale independently and may run on the most suitable hardware.
This is a [shared nothing architecture](https://en.wikipedia.org/wiki/Shared-nothing_architecture).
It increases cluster availability, and simplifies cluster maintenance as well as cluster scaling.
<p align="center">
<img src="docs/Cluster-VictoriaMetrics_cluster-scheme.webp" width="800">
</p>
## Multitenancy


@ -56,9 +56,7 @@ Each service may scale independently and may run on the most suitable hardware.
This is a [shared nothing architecture](https://en.wikipedia.org/wiki/Shared-nothing_architecture).
It increases cluster availability, and simplifies cluster maintenance as well as cluster scaling.
<p align="center">
<img src="Cluster-VictoriaMetrics_cluster-scheme.webp" width="800">
</p>
## Multitenancy


@ -520,9 +520,7 @@ via ["submit metrics" API](https://docs.datadoghq.com/api/latest/metrics/#submit
The DataDog agent allows configuring the destination for sending metrics via the ENV variable `DD_DD_URL`
or via the [configuration file](https://docs.datadoghq.com/agent/guide/agent-configuration-files/) in the `dd_url` section.
<p align="center">
<img src="Single-server-VictoriaMetrics-sending_DD_metrics_to_VM.webp" width="800">
</p>
To configure the DataDog agent via ENV variable, add the following prefix:
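The concrete snippet sits outside this hunk; a minimal sketch, assuming a single-node VictoriaMetrics instance reachable at `victoriametrics:8428` and its DataDog-compatible `/datadog` endpoint (host, port and API key value are placeholders), might look like:

```sh
# Hypothetical example: point the containerized DataDog agent at VictoriaMetrics
# instead of datadoghq.com. Extra mounts/options are omitted for brevity.
docker run \
  -e DD_DD_URL=http://victoriametrics:8428/datadog \
  -e DD_API_KEY=anykey \
  gcr.io/datadoghq/agent:7
```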
@ -549,9 +547,7 @@ pick [single-node or cluster URL](https://docs.victoriametrics.com/url-examples.
DataDog allows configuring [Dual Shipping](https://docs.datadoghq.com/agent/guide/dual-shipping/) of metrics
via the ENV variable `DD_ADDITIONAL_ENDPOINTS` or via the configuration file option `additional_endpoints`.
<p align="center">
<img src="Single-server-VictoriaMetrics-sending_DD_metrics_to_VM_and_DD.webp" width="800">
</p>
Run the DataDog agent with the following ENV variable to use VictoriaMetrics as an additional metrics receiver:
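The guide's exact command is outside this hunk; a hedged sketch of the shape of that variable, with a placeholder URL and API key, could be:

```sh
# DD_ADDITIONAL_ENDPOINTS maps an extra intake URL to a list of API keys;
# VictoriaMetrics does not validate the key, so any placeholder typically works.
export DD_ADDITIONAL_ENDPOINTS='{"http://victoriametrics:8428/datadog": ["apikey"]}'
```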


@ -528,9 +528,7 @@ via ["submit metrics" API](https://docs.datadoghq.com/api/latest/metrics/#submit
The DataDog agent allows configuring the destination for sending metrics via the ENV variable `DD_DD_URL`
or via the [configuration file](https://docs.datadoghq.com/agent/guide/agent-configuration-files/) in the `dd_url` section.
<p align="center">
<img src="Single-server-VictoriaMetrics-sending_DD_metrics_to_VM.webp" width="800">
</p>
To configure the DataDog agent via ENV variable, add the following prefix:
@ -557,9 +555,7 @@ pick [single-node or cluster URL](https://docs.victoriametrics.com/url-examples.
DataDog allows configuring [Dual Shipping](https://docs.datadoghq.com/agent/guide/dual-shipping/) of metrics
via the ENV variable `DD_ADDITIONAL_ENDPOINTS` or via the configuration file option `additional_endpoints`.
<p align="center">
<img src="Single-server-VictoriaMetrics-sending_DD_metrics_to_VM_and_DD.webp" width="800">
</p>
Run the DataDog agent with the following ENV variable to use VictoriaMetrics as an additional metrics receiver:


@ -451,9 +451,7 @@ networks:
Before running our docker-compose, make sure that your directory contains all required files:
<p align="center">
<img src="guide-vmanomaly-vmalert/guide-vmanomaly-vmalert_files.webp" max-width="1000" alt="all files">
</p>
<img src="guide-vmanomaly-vmalert/guide-vmanomaly-vmalert_files.webp" width="800" alt="all files">
This docker-compose file will pull docker images, set up each service and run them all together with the command:
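The command itself falls outside this hunk; for Compose-based setups it is typically:

```sh
docker compose up -d    # or `docker-compose up -d` with the standalone v1 binary
```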


@ -231,9 +231,7 @@ Forwarding from [::1]:8429 -> 8429
To check that `VMAgent` collects metrics from the k8s cluster, open [http://127.0.0.1:8429/targets](http://127.0.0.1:8429/targets) in the browser.
You will see something like this:
<p align="center">
<img src="getting-started-with-vm-operator_vmcluster.webp" width="800" alt="">
</p>
`VMAgent` connects to [kubernetes service discovery](https://kubernetes.io/docs/concepts/services-networking/service/) and gets targets which need to be scraped. This service discovery is controlled by [VictoriaMetrics Operator](https://github.com/VictoriaMetrics/operator).
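The `Forwarding from [::1]:8429 -> 8429` line in the hunk header comes from a `kubectl port-forward` session; a sketch of that step, with a hypothetical service name, is:

```sh
# The service created by the operator is usually named vmagent-<VMAgent name>;
# "example-vmagent" is a placeholder here.
kubectl port-forward svc/vmagent-example-vmagent 8429:8429
```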
@ -311,15 +309,11 @@ EOF
To check that [VictoriaMetrics](https://victoriametrics.com) is collecting metrics from the k8s cluster, open [http://127.0.0.1:3000/dashboards](http://127.0.0.1:3000/dashboards) in your browser and choose the `VictoriaMetrics - cluster` dashboard. Use `admin` as the login and the `password` that you previously got from kubectl.
<p align="center">
<img src="getting-started-with-vm-operator_vmcluster-grafana1.webp" width="800" alt="grafana dashboards">
</p>
The expected output is:
<p align="center">
<img src="getting-started-with-vm-operator_vmcluster-grafana2.webp" width="800" alt="grafana dashboards">
</p>
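The password retrieval mentioned above is not shown in this hunk; with the stock Grafana Helm chart it is usually something like the following (the secret name and namespace are assumptions):

```sh
# Decode the admin password stored by the Grafana Helm chart.
kubectl get secret grafana --namespace default \
  -o jsonpath="{.data.admin-password}" | base64 --decode; echo
```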
## 6. Summary


@ -34,9 +34,7 @@ The [-retentionPeriod](https://docs.victoriametrics.com/#retention) sets how lon
The diagram below shows a proposed solution:
<p align="center">
<img src="guide-vmcluster-multiple-retention-setup.webp" width="800">
</p>
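For context on the `-retentionPeriod` flag referenced above, a minimal illustration (binary path, values and data paths are placeholders, not the guide's actual proposal) could be:

```sh
# Illustration only: two vmstorage tiers with different retention.
# -retentionPeriod takes a number of months by default (a duration such as 100d also works).
# Each process would normally run as its own long-lived service.
/path/to/vmstorage-prod -retentionPeriod=1  -storageDataPath=/vm-data-short &
/path/to/vmstorage-prod -retentionPeriod=12 -storageDataPath=/vm-data-long &
```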
**Implementation Details**


@ -377,21 +377,15 @@ The expected result of the query `count(up{kubernetes_pod_name=~".*vmselect.*"})
To test via Grafana, we need to install it first. [Install and connect Grafana to VictoriaMetrics](https://docs.victoriametrics.com/guides/k8s-monitoring-via-vm-cluster.html#4-install-and-connect-grafana-to-victoriametrics-with-helm), log in to Grafana and open the metrics [Explore](http://127.0.0.1:3000/explore) page.
<p align="center">
<img src="k8s-ha-monitoring-via-vm-cluster_explore.webp" width="800" alt="grafana explore">
</p>
Choose `victoriametrics` from the list of datasources and enter `count(up{kubernetes_pod_name=~".*vmselect.*"})` in the **Metric browser** field as shown in the screenshot, then press the **Run query** button:
<p align="center">
<img src="k8s-ha-monitoring-via-vm-cluster_explore-count-up.webp" width="800" alt="">
</p>
The expected output is:
<p align="center">
<img src="k8s-ha-monitoring-via-vm-cluster_explore-count-up-graph.webp" width="800" alt="">
</p>
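The same query can also be checked outside Grafana through vmselect's Prometheus-compatible API; a sketch, assuming the default Helm service name and tenant `0`, is:

```sh
# Forward the vmselect service locally (service name is an assumption),
# then run the query against the /select/<tenant>/prometheus endpoint.
kubectl port-forward svc/vmselect-victoria-metrics-cluster 8481:8481 &
curl -s 'http://127.0.0.1:8481/select/0/prometheus/api/v1/query' \
  --data-urlencode 'query=count(up{kubernetes_pod_name=~".*vmselect.*"})'
```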
## 5. High Availability
@ -423,17 +417,13 @@ Return to Grafana Explore and press the **Run query** button again.
The expected output is:
<p align="center">
<img src="k8s-ha-monitoring-via-vm-cluster_explore-count-up-graph.webp" width="800" alt="">
</p>
As you can see, after we scaled the number of `vmstorage` replicas down from three to two pods, metrics are still available and correct. The response is not partial, as it was before scaling. We also see that the query `count(up{kubernetes_pod_name=~".*vmselect.*"})` returns the same value as before.
To confirm that the number of `vmstorage` pods is now two, execute the following request in Grafana Explore:
<p align="center">
<img src="k8s-ha-monitoring-via-vm-cluster_explore-count-up-graph2.webp" width="800" alt="">
</p>
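For reference, the scale-down exercised in this step is typically performed with a command like the one below (the StatefulSet name depends on the Helm release and is a placeholder):

```sh
kubectl scale statefulset vmstorage-victoria-metrics-cluster --replicas=2
```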
## 6. Final thoughts


@ -26,9 +26,7 @@ We will use:
* [Helm 3](https://helm.sh/docs/intro/install)
* [kubectl 1.21](https://kubernetes.io/docs/tasks/tools/install-kubectl)
<p align="center">
<img src="k8s-monitoring-via-vm-cluster_scheme.webp" width="800" alt="VictoriaMetrics Cluster on Kubernetes cluster">
</p>
## 1. VictoriaMetrics Helm repository
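The body of this section is not part of the diff; adding the VictoriaMetrics chart repository typically looks like this (the `vm` alias is arbitrary):

```sh
helm repo add vm https://victoriametrics.github.io/helm-charts/
helm repo update
```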
@ -535,24 +533,19 @@ kubectl --namespace default port-forward $POD_NAME 3000
To check that [VictoriaMetrics](https://victoriametrics.com) collects metrics from the k8s cluster, open [http://127.0.0.1:3000/dashboards](http://127.0.0.1:3000/dashboards) in the browser and choose the `Kubernetes Cluster Monitoring (via Prometheus)` dashboard. Use `admin` as the login and the `password` that you previously got from kubectl.
<p align="center">
<img src="k8s-monitoring-via-vm-cluster_dashes-agent.webp" width="800" alt="grafana dashboards">
</p>
You will see something like this:
<p align="center">
<img src="k8s-monitoring-via-vm-cluster_dashboard.webp" width="800" alt="Kubernetes metrics provided by vmcluster">
</p>
The VictoriaMetrics dashboard is also available to use:
<p align="center">
<img src="k8s-monitoring-via-vm-cluster_grafana-dash.webp" width="800" alt="VictoriaMetrics cluster dashboard">
</p>
vmagent has its own dashboard:
<p align="center">
<img src="k8s-monitoring-via-vm-cluster_vmagent-grafana-dash.webp" width="800" alt="vmagent dashboard">
</p>
## 6. Final thoughts


@ -26,9 +26,7 @@ We will use:
* [Helm 3](https://helm.sh/docs/intro/install)
* [kubectl 1.21](https://kubernetes.io/docs/tasks/tools/install-kubectl)
<p align="center">
<img src="k8s-monitoring-via-vm-single_k8s-scheme.webp" width="800" alt="VictoriaMetrics Single on Kubernetes cluster">
</p>
## 1. VictoriaMetrics Helm repository
@ -340,19 +338,15 @@ Now Grafana should be accessible on the [http://127.0.0.1:3000](http://127.0.0.1
To check that VictoriaMetrics collects metrics from the k8s cluster, open [http://127.0.0.1:3000/dashboards](http://127.0.0.1:3000/dashboards) in the browser and choose the `Kubernetes Cluster Monitoring (via Prometheus)` dashboard. Use `admin` as the login and the `password` that you previously obtained from kubectl.
<p align="center">
<img src="k8s-monitoring-via-vm-single_grafana-dashboards.webp" width="800" alt="">
</p>
You will see something like this:
<p align="center">
<img src="k8s-monitoring-via-vm-single_grafana-k8s-dashboard.webp" width="800" alt="">
</p>
The VictoriaMetrics dashboard is also available to use:
<p align="center">
<img src="k8s-monitoring-via-vm-single_grafana.webp" width="800" alt="">
</p>
## 5. Final thoughts


@ -16,12 +16,7 @@ Let's cover the case. You have multiple regions with workloads and want to colle
The monitoring setup is in the dedicated regions as shown below:
<p align="center">
<img
src="multi-regional-setup-dedicated-regions.webp"
width="800"
alt="Multi-regional setup with VictoriaMetrics: Dedicated regions for monitoring">
</p>
<img src="multi-regional-setup-dedicated-regions.webp" width="800" alt="Multi-regional setup with VictoriaMetrics: Dedicated regions for monitoring">
Every workload region (Earth, Mars, Venus) has a vmagent that sends data to multiple regions with a monitoring setup.
The monitoring setup (Ground Control 1 and 2) contains a VictoriaMetrics Time Series Database (TSDB), either a cluster or a single-node instance.
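A sketch of the per-region vmagent configuration implied here, replicating collected data to both monitoring regions (binary path, scrape config path and Ground Control URLs are placeholders; for a cluster setup the write path would be `/insert/<tenant>/prometheus/api/v1/write`):

```sh
# vmagent ships every collected sample to both remote write URLs.
/path/to/vmagent-prod \
  -promscrape.config=/etc/vmagent/scrape.yml \
  -remoteWrite.url=https://ground-control-1.example.com/api/v1/write \
  -remoteWrite.url=https://ground-control-2.example.com/api/v1/write
```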