Amazing product. We've used VictoriaMetrics for quite a while, and previously used Loki and a custom ClickHouse/Vector approach for logs; we have since switched to VictoriaLogs. It is much better and faster than Loki, and the same goes for the custom CH/Vector setup we had. Kudos to the team. We're waiting for VictoriaTraces so we can switch our Tempo instance over to it for the OpenTelemetry stuff.
Can you talk a little bit about your Victoria Logs setup? About how many logs are you ingesting and what kind of sizing do you have on your setup?
Sure thing!
Ingested logs (24h): 428 million
Ingested bytes (24h): 625 GB
Insert req/s: 6k
8vCPU, 16GB mem. Running standard-rwo PVC on GCP.
We have a couple of projects like this with similar usage and similar machine sizing.
Still running vmlogs-single, and we will until we see a need to move to the vmlogs-cluster version.
That sounds like a lot of resources provisioned for 6k/s. 625GB/24hr is a small footprint.
I would have said it sounded pretty good. What technologies are you comparing it against, out of curiosity?
ClickHouse
Can you share any additional details? What kind of ingestion do you have, with what dimensions of a clickhouse cluster?
I'm also curious how it handles structured vs unstructured logs.
Thanks!
That seems pretty good. Do you have any sort of HA solution?
Personally, a Hetzner SX295 with 14x 22 TB drives in a ZFS setup.
It ingests 70k lines per second without breaking a sweat.
Reads are just as fast.
VictoriaLogs is awesome and has put me at odds with my new devops hire, who hates it and insists that Dynatrace can do the same thing for only $$$ a month. I tried using their front end and it's horrible. The API for Victoria is awesome, and it's fast.
I use it in docker on a NAS - VictoriaMetrics, VictoriaLogs, Grafana - low resource usage, fast, so far zero issues.
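For anyone wanting a similar home-lab stack, a minimal docker-compose sketch is below. The image names and default ports (8428 for VictoriaMetrics, 9428 for VictoriaLogs, 3000 for Grafana) match the projects' documented defaults, but the volume layout and tags here are my assumptions, not the commenter's actual config:

```yaml
services:
  victoriametrics:
    image: victoriametrics/victoria-metrics:latest
    ports:
      - "8428:8428"
    volumes:
      - vm-data:/victoria-metrics-data
    command:
      - -storageDataPath=/victoria-metrics-data

  victorialogs:
    image: victoriametrics/victoria-logs:latest
    ports:
      - "9428:9428"
    volumes:
      - vl-data:/victoria-logs-data
    command:
      - -storageDataPath=/victoria-logs-data

  grafana:
    image: grafana/grafana:latest
    ports:
      - "3000:3000"
    volumes:
      - grafana-data:/var/lib/grafana

volumes:
  vm-data:
  vl-data:
  grafana-data:
```

Point Grafana at `http://victoriametrics:8428` as a Prometheus-compatible data source, and at VictoriaLogs via its Grafana plugin, and you have the whole stack on one box.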
I’m actually working on something right now where this can be extremely useful to me and I didn’t know about VictoriaLogs was using Loki. I wonder if anyone knows if there are other better alternatives or how this stacks up?
> VictoriaLogs was using Loki
Someone can correct me if I'm wrong here, but I don't think that's the case. If anything, it's competing against Loki and Elastic:
https://itnext.io/why-victorialogs-is-a-better-alternative-t...
I think the GP intended to put some punctuation between "VictoriaLogs" and "was".
We recently dropped in VictoriaLogs to replace an old rsyslog setup. Extremely easy to deploy and admin, and it integrates nicely with Grafana. Definitely recommend doing this and moving to structured logging formats if you aren't already, or if you're hesitant about the complexity of the Elastic stack, etc.
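The move to structured logging mentioned above mostly comes down to emitting JSON lines instead of plain text; VictoriaLogs' jsonline ingestion uses `_msg` for the message and `_time` for the timestamp, and treats every other key as a queryable field. A minimal sketch in Python (the `service` and `level` fields are arbitrary examples, not required names):

```python
import json
from datetime import datetime, timezone

def make_log_line(msg, **fields):
    """Build one JSON-line log record in the shape VictoriaLogs'
    /insert/jsonline endpoint accepts: _msg holds the message text,
    _time an RFC3339 timestamp; any extra keys become log fields."""
    record = {
        "_msg": msg,
        "_time": datetime.now(timezone.utc).isoformat(),
        **fields,
    }
    return json.dumps(record)

line = make_log_line("user logged in", service="auth", level="info")
print(line)
```

Batching these newline-separated records and POSTing them to the jsonline endpoint (or letting a shipper like Vector or rsyslog's JSON template do it) is all the "structure" there is to it.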
There is no complexity in Elastic stack, only expensive footguns /s