docs/CaseStudies.md: update Grammarly case study with the newly published article https://www.grammarly.com/blog/engineering/monitoring-with-victoriametrics/
Follow-up c0246b2e17
parent 7be3848ee6
commit 2e04ddbd32
@@ -336,36 +336,7 @@ Numbers:
[Grammarly](https://www.grammarly.com/) provides a digital writing assistant that helps 30 million people and 30 thousand teams write more clearly and effectively every day. In building a product that scales across multiple platforms and devices, Grammarly works to empower users whenever and wherever they communicate.
> Maintenance and scaling for our previous on-premise monitoring system was hard and required a lot of effort from our side. The previous system was not optimized for storing frequently changing metrics (moderate [churn rate](https://docs.victoriametrics.com/FAQ.html#what-is-high-churn-rate) was a concern). The costs of the previous solution were not optimal.
> We evaluated various cloud-based and on-premise monitoring solutions: Sumo Logic, DataDog, SignalFX, Amazon CloudWatch, Prometheus, M3DB, Thanos, Graphite, etc. PoC results were sufficient for us to move forward with VictoriaMetrics due to the following reasons:
- High performance
- Support for Graphite and OpenMetrics data ingestion types (see the ingestion sketch after this list)
- Good documentation and easy bootstrap
- Responsiveness of VictoriaMetrics support team during research and afterward
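As an aside on the Graphite ingestion path mentioned in the list above, here is a minimal editorial sketch (not part of the case study) of pushing one sample in Graphite plaintext format over TCP. The `localhost:2003` address and the metric name are assumptions for illustration; they presume a single-node VictoriaMetrics started with `-graphiteListenAddr=:2003`:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Graphite plaintext line: "<metric.path>[;tag1=value1] <value> <unix_timestamp>\n".
	// The target address is an assumption: a single-node VictoriaMetrics started with
	// -graphiteListenAddr=:2003 and reachable on localhost.
	conn, err := net.DialTimeout("tcp", "localhost:2003", 5*time.Second)
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// Write one sample with a Graphite tag; VictoriaMetrics stores it as a labeled time series.
	line := fmt.Sprintf("app.requests.total;env=prod 42 %d\n", time.Now().Unix())
	if _, err := conn.Write([]byte(line)); err != nil {
		panic(err)
	}
}
```

The same line could be sent with `nc`; the point is only that Graphite-speaking agents can write to VictoriaMetrics directly, without a protocol translation layer.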
> Switching from our previous on-premise monitoring system to VictoriaMetrics allowed us to reduce infrastructure costs by an order of magnitude while improving the DevOps and developer experience.
Numbers:
- [Cluster version](https://docs.victoriametrics.com/Cluster-VictoriaMetrics.html) of VictoriaMetrics
- Active time series: 35M
- Ingestion rate: 950K new samples per second
- Total number of datapoints: 44 trillion
- Churn rate: 27M new time series per day
- Data size on disk: 23 TB
- Index size on disk: 700 GB
- The average datapoint size on disk: 0.5 bytes (a quick check of this figure follows this list)
- Query rate:
- `/api/v1/query_range`: 350 queries per second
- `/api/v1/query`: 24 queries per second
- Query duration:
- 99th percentile: 500 milliseconds
- 90th percentile: 70 milliseconds
- median: 2 milliseconds
- CPU usage: 12 CPU cores
- RAM usage: 250 GB
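As a quick editorial sanity check (not from the case study; it reads 1 TB as 10^12 bytes), the reported average datapoint size follows from the other figures:

$$
\frac{23 \times 10^{12}\ \text{bytes on disk}}{44 \times 10^{12}\ \text{datapoints}} \approx 0.52\ \text{bytes per datapoint}
$$

which agrees with the reported average of roughly 0.5 bytes per datapoint.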
See [this blogpost on how Grammarly reduces costs and maintenance burden for their observability solution by 10x after switching to VictoriaMetrics](https://www.grammarly.com/blog/engineering/monitoring-with-victoriametrics/).
## Groove X