I run a small home lab - the number of servers varies from time to time. Currently five, all Linux.
When I heard about log consolidation I imagined that I would get a nice dashboard type view where I could see a consolidated, real time, view of all my server logs go by. Victoria Logs does that for me. I also imagined that there would be a way to flag particular log entries as “normal, and expected” so they would be excluded in the future - the goal being to get this dashboard to a state where if anything appears, it’s probably bad. I can’t see a way to do that in Victoria Logs. Do I need to try harder? If Victoria Logs won’t do it - is there anything that will?
What are you using to ship the logs to VL?
If you want to exclude “normal” logs you should start excluding them before they reach VL, so the only logs you have are the interesting ones.
What are you using to ship the logs to VL?
That’s the reason I’m here asking about logging. I’m in the process of changing my setup and wondering if I should switch it all up. I was using systemd-journal-remote, but I’m switching from Debian to Alpine, so - no more systemd.
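Without systemd, one option is a lightweight shipper like Vector reading the file that Alpine’s busybox syslogd writes and pushing it to VL’s Elasticsearch-compatible ingest endpoint. A rough sketch - the hostname, port, and the “CRON” pattern are placeholders, not anything from my actual setup:

```toml
# vector.toml - minimal sketch; "victorialogs:9428" is a placeholder host.

[sources.syslog_files]
type    = "file"
include = ["/var/log/messages"]   # busybox syslogd's default on Alpine

# Optional: drop "normal, expected" lines before they ever leave the box.
[transforms.drop_noise]
type      = "filter"
inputs    = ["syslog_files"]
condition = '!contains(string!(.message), "CRON")'   # example pattern only

[sinks.victorialogs]
type      = "elasticsearch"
inputs    = ["drop_noise"]
endpoints = ["http://victorialogs:9428/insert/elasticsearch/"]
mode      = "bulk"
healthcheck.enabled = false   # VL doesn't answer the ES health check
```

Note the filter transform is optional - if you want to keep a full copy of everything in VL, ship it all and do the “hide the normal stuff” filtering at query time instead.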
you should start excluding them before they reach VL
Now that confuses me. As I said in my original post - I had some preconceptions about centralised logging before I set it up, and having a single place to manage filters was certainly something I was hoping to get from it. Also any filtering would only be for reporting. I’d like to keep a full set of log data for potential problem analysis etc.
VL is really about aggregation, not visualisation. You’d probably just need to set up a grafana dashboard with filters for all your normal traffic.
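For the dashboard panel itself, a sketch of what that filter could look like - a LogsQL query (via the VictoriaLogs Grafana datasource) that subtracts known-normal lines; the quoted phrases here are just example patterns you’d replace with your own:

```
_time:15m NOT "Started Session" NOT "CRON" NOT "session opened"
```

This keeps the full log stream stored in VL for later problem analysis; the exclusions only apply to what the panel displays, and you extend the query as you classify more entries as “normal, and expected”.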
In case you decide to look for alternatives, I would probably go with elastic/filebeat/grafana, a fairly standard log monitoring suite. Not saying it’s better or worse than Victoria Logs, which I have no experience with.
I’m already running a grafana instance, so I’ll look into elastic/filebeat. Thanks.
Elastic is heaaaaavy. You might want to check out Loki. I haven’t used it, but I think it’d be easier to get started with than Victoria Logs since it integrates tightly with grafana.
Yeah, I’ve been doing some more reading. Victoria Logs is doing a good job consolidating my logs and is very lightweight. It’s the visualisation that I’m missing. Grafana can do it, but I’m having trouble getting my head around it. That’s OK - it’s just my home lab and it’s mainly a learning exercise - I need to learn some more.
Yeah I use VL for lemmy.ca and it’s super quick and lightweight, but getting what you want into grafana can be difficult.
The more you can filter and label at the source, the less you have to work out in VL.
I use alloy (which is kinda heavy) to extract and prepare only the data I want and it works great so far.
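For anyone curious, a rough sketch of that kind of Alloy pipeline - read the journal, drop known-normal lines at the source, and push the rest to VL’s Loki-compatible endpoint. The regex patterns and the endpoint hostname are assumptions, not my real config:

```alloy
// Read the local systemd journal.
loki.source.journal "system" {
  forward_to = [loki.process.drop_noise.receiver]
}

// Drop lines matching "normal, expected" patterns before shipping.
loki.process "drop_noise" {
  stage.drop {
    expression = "(CRON|session opened for user)"   // example patterns only
  }
  forward_to = [loki.write.vl.receiver]
}

// VictoriaLogs accepts the Loki push protocol on this path.
loki.write "vl" {
  endpoint {
    url = "http://victorialogs:9428/insert/loki/api/v1/push"
  }
}
```

The trade-off vs filtering in Grafana is that dropped lines are gone for good, so I only drop things I’m confident I’ll never need for troubleshooting.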



