Solved Logs monitoring software or methods


CaffeineAddict

One thing I really miss in Linux is a GUI to monitor, filter and oversee logs, and to subscribe to certain logs.
I've been googling around, and from what I see all the shiny log monitoring software is commercial; there is barely anything free.

So my first question is: is there a free one, and which would you suggest?

Second question: what other methods do you use to monitor logs with ease?
 


Thanks, but it doesn't appear to have a UI? There are no screenshots, not even on a Google image search.

Logfile Navigator (lnav) also looks promising; it is recommended in several web articles.
It looks pretty advanced, and I like that it merges log files into one view ordered by timestamp; I'll certainly try it out.
But likewise it's for the terminal, not a GUI, although that's perhaps not a problem because the features of lnav are very promising.
 
Thanks, but it doesn't appear to have a UI? There are no screenshots, not even on a Google image search.
gnome-system-log and ksystemlog both have a GUI - you can get them through the Synaptic Package Manager - loggedfs is CLI only.
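If you'd rather use the command line than Synaptic, apt should pull them in too (a sketch; package names as in the Debian/Ubuntu repos - newer GNOME releases ship gnome-logs instead):

Code:
sudo apt update
sudo apt install ksystemlog         # KDE log viewer
sudo apt install gnome-system-log   # GNOME log viewer (gnome-logs on newer releases)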
 
I'm not fully sure this is what you're looking for, but Grafana Cloud is free up to 50GB of logs per month.


You can just configure a log shipper like promtail to push the logs to Grafana Cloud.

We use this to ship both our Kubernetes cluster logs and regular Linux logs to a central location where they can be monitored, alerted on, or reviewed.
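If it helps, here's roughly what that looks like - a minimal promtail config sketch, where the push URL, user ID and API key are placeholders you'd copy from your Grafana Cloud stack:

Code:
# write a minimal promtail config (run as root; values are placeholders)
cat > /etc/promtail/config.yml <<'EOF'
server:
  http_listen_port: 9080
positions:
  filename: /tmp/positions.yaml
clients:
  # replace host, user ID and API key with the ones from your Grafana Cloud stack
  - url: https://<user_id>:<api_key>@logs-prod-XXX.grafana.net/loki/api/v1/push
scrape_configs:
  - job_name: system
    static_configs:
      - targets: [localhost]
        labels:
          job: varlogs
          __path__: /var/log/*.log
EOF
promtail -config.file=/etc/promtail/config.yml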
 
gnome-system-log and ksystemlog both have a GUI - which you can get through the Synaptic Package Manager - loggedfs is only CLI
Sorry, I didn't know that, but ksystemlog is exactly the thing since I'm using KDE, thanks!

Honestly, I've already installed both lnav and ksystemlog to test the suggestions, and both are great. I'll experiment with them and decide over time which is better for me; until then I'll be using both.

I'm not fully sure this is what you're looking for, but Grafana Cloud is free up to 50GB of logs per month.
Thank you for sharing, but I think I have no need for cloud solutions because all my PCs are on a LAN; no enterprise here.
I'm looking only for solutions that are open source and without paid plans.

---

Even though I'm happy with the suggestions so far, I'll leave this thread unsolved because I want to hear as many opinions and suggestions as people might have.
 
You can spin up a Grafana stack locally, either natively or with a relatively simple docker-compose file. Grafana is open source. There are some enterprise offerings, but if this is just for the standard logs of a set of Linux machines, you won't need them.
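For example, a minimal compose file is enough to get started (a sketch: the image tags and data-source wiring are assumptions, check the Grafana/Loki docs for current versions):

Code:
# minimal local Grafana + Loki sketch
cat > docker-compose.yml <<'EOF'
services:
  loki:
    image: grafana/loki:2.9.4
    ports:
      - "3100:3100"
  grafana:
    image: grafana/grafana:latest
    ports:
      - "3000:3000"
EOF
docker compose up -d
# web UI at http://localhost:3000; add http://loki:3100 as a Loki data source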
 
For larger hosting setups (>10 servers) I centralize logs using systemd-journal-remote, as described in this blogpost, which sends all systemd journals to one central server. The blogpost describes doing this over a private, secured network (in our case a WireGuard mesh). If you don't have that and have to send logs over the public internet, make sure to use https:// to transfer them (you can use self-signed or, better, Let's Encrypt certificates).
Once everything is in one place, I set up Graylog on top of that.

Graylog has its own tools to collect log files, which I don't like; they appeared unreliable in testing, and systemd is the default on Debian, which is what I use on servers.

systemd-journal-remote is extremely reliable: even if your central log server is down for several days, the clients will store the logs and upload them once the server is available again.

The logs of each client server are still stored as normal in that server's local journal.
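The moving parts are small. Roughly, assuming Debian package names and a placeholder hostname (plain http like this is only OK inside the private/WireGuard network mentioned above):

Code:
# on the central server: receive journals
sudo apt install systemd-journal-remote
sudo systemctl enable --now systemd-journal-remote.socket   # listens on port 19532

# on each client: upload the local journal to the central server
sudo apt install systemd-journal-remote
printf '[Upload]\nURL=http://logserver.lan:19532\n' | sudo tee /etc/systemd/journal-upload.conf
sudo systemctl enable --now systemd-journal-upload.service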

Can you share some insight on what exactly you want to do? Just store logs in a central place or have "a dashboard" which analyzes logs over $timeframe, or have alerts on certain log lines?
 
systemd-journal-remote
Thank you, I'll look into it as soon as I can spend more time at the PC; collecting logs from all PCs on the LAN sounds great.

Can you share some insight on what exactly you want to do? Just store logs in a central place or have "a dashboard" which analyzes logs over $timeframe, or have alerts on certain log lines?
I'm mostly interested in harvesting all logs on the local system only. A dashboard-style UI would be great; here is a sample screenshot that got me thinking about this and prompted the question:

EventLog-Analyzer.png


This is what a commercial UI looks like; there is hardly anything free with such a dashboard.
But the solutions given so far are good as well.
 
nah no commercial stuff :p There is enough open source for it.

If you read a bit into journalctl, you can actually do pretty nifty things with it. But it is for the CLI of course.
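A few examples of the nifty things (all standard journalctl flags; the unit name is just an example):

Code:
journalctl -p err -b                       # all errors since the current boot
journalctl -u ssh --since "1 hour ago"     # one unit, time-limited
journalctl -f -t sshd                      # follow live, only the sshd syslog tag
journalctl -g 'Failed password'            # grep the journal (needs a pcre2-enabled systemd)
journalctl -o json --since today           # structured output for scripting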

I think I'd go with Graylog. There you can do VERY nifty things using the search. I'm generally not a fan of dashboards, as in my experience kinda nobody ever looks at them (two weeks after you installed them).

I'm a fan of "searches" and alerts. As in searching "how many requests did we get this month that ended in 5xx" (OK, that would be a good dashboard :p) and alerts like "we are getting too many HTTP 5xx".
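Outside Graylog you can approximate that search with a one-liner; a quick-and-dirty sketch, assuming a common/combined access log format where the status code is the 9th field:

Code:
# count 5xx responses in an nginx/apache access log (sketch)
awk '$9 ~ /^5[0-9][0-9]$/ {n++} END {print n+0}' /var/log/nginx/access.log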

By the way, don't discount LLMs here! They are GREAT at understanding log files! Asking literally in human language "how many HTTP 5xx did we have this month" is VERY comfortable!
 
By the way, don't discount LLMs here! They are GREAT at understanding log files!
Sorry, but what are LLMs?

I'm generally not a fan of dashboards, as in my experience kinda nobody ever looks at them (two weeks after you installed them).
I believe you, I can imagine myself in the same position!
That's a good reason not to pursue log dashboards, lol.

---

I'm going to mark this as solved. Even though I wanted a dashboard-style program, or any UI really, I pretty much like the lnav suggested by @atanere even though it's a CLI program.
The features it provides are far beyond what I'll ever need.

ksystemlog is nice too, but sadly it needs to be run with sudo, which means the UI is white: my dark theme is ignored because themes are per-user and the app runs as root.
Otherwise it's OK.

Thank you guys for all suggestions!
 
Sorry, but what are LLMs?

Large language models, like ChatGPT; ollama lets you run one locally. Here is how you install it on your workstation/server to try it out:

Code:
curl -fsSL https://ollama.com/install.sh | sh   # download and run the installer
ollama run dolphin-mixtral                      # pull the model (~26 GB) and open an interactive prompt
Paste a bunch of logs into it and ask it questions in human language, or google how to use it from a program you write, in python3 or so.
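For example, following the ollama README's pattern of inlining content into the prompt (model name as above; the prompt wording and the tail limit are just suggestions):

Code:
ollama run dolphin-mixtral "Summarize these log lines and flag anything suspicious: $(journalctl -p warning --since today | tail -n 200)"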

That's a good reason not to pursue log dashboards, lol.

Nah, go for it. It's nice to have and not need it ;) Better than the other way around :p
 
Paste a bunch of logs into it and ask it questions in human language, or google how to use it from a program you write, in python3 or so.
While this looks attractive and cool, I'm not sure how wise it would be to put logs into a cloud service.
But I have an idea: lnav has options to anonymize log entries, so after doing that it should be safe to use the cloud.
 
not sure how wise it would be to put logs into a cloud service.

There is no sentence that is wise and that ends with "put into a cloud service".

I'm not even sure if there is a cloud service to stream journalctl logs to. Most of them want you to install their weird Java app on your servers to fish out the logs for you. As a sysadmin, that always seems a bit fishy to me x)

You have neglected to check out the ollama thing I posted; it's this: https://ollama.com/

It's not a cloud service but a large language model (something like ChatGPT) that runs on YOUR server (or workstation). You should try it, you will like it, trust me :p The example I posted above downloads a 26 GB LLM that is trained for coding. You can ask it lots of things about "Linux" as well. Here is an example from my workstation - the stuff after the >>> is what I asked it to do:

Code:
>>> how do I execute SHOW TABLES; on a remote mariadb server in one command from my workstation using SSH?
 To execute the "SHOW TABLES;" command on a remote MariaDB server via SSH in one command, you can use the following syntax:

```bash
ssh -p <port> <username>@<hostname> 'mariadb -u<db_username> -p<db_password> -e "SHOW TABLES;"'
```

Replace the placeholders with actual values:
- `<port>`: The SSH port number (default is 22)
- `<username>`: Your workstation's username or the SSH username if it differs.
- `<hostname>`: The remote MariaDB server's hostname or IP address.
- `<db_username>`: Your MariaDB database username.
- `<db_password>`: Your MariaDB database password.

For example, if your SSH port is 2222, your workstation username is "john", the remote MariaDB server hostname is "example.com", your MariaDB database username is "dblogin", and your MariaDB database password is "securepassword", you would use the following command:

```bash
ssh -p 2222 john@example.com 'mariadb -udblogin -psecurepassword -e "SHOW TABLES;"'
```

Keep in mind that this command will display sensitive information, such as the MariaDB database username and password, in your terminal. If you prefer not to do this, consider using a configuration file or creating an SSH tunnel instead.

It took about one minute to generate this answer on my Core i9-something ThinkPad. Yes, it does run on a CPU in my case, which makes it "slow" (one minute is fair and usable imho, but if you have a gaming PC, that's preferably what you want to run this on).

Long story short, do not put ANY of your data into the cloud. Especially not AWS or Google Cloud :p Do everything using FOSS on Debian servers ;)


That's cool, I didn't know it, and it speaks systemd-journal too, thanks for sharing :)
 
Do everything using FOSS on Debian servers
lol, that's what I'm already doing, hehe.

You have neglected to check out the ollama thing I posted; it's this: https://ollama.com/
Indeed I did; I only now see the project is on GitHub, nice!
I'll consider it if I get stuck interpreting logs myself.

That's cool, I didn't know it, and it speaks systemd-journal too, thanks for sharing :)
Yup, I really like it. It's very good for inspecting firewall logs because you can filter in based on regex; compared to reading firewall logs directly, lnav is the starship Enterprise.
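For anyone curious, the workflow is roughly like this (DROP/REJECT are just examples of typical iptables log prefixes):

Code:
journalctl -k | lnav            # pipe kernel/firewall messages into lnav
# then, inside lnav:
:filter-in (DROP|REJECT)        # keep only lines matching the regex
:filter-out ACCEPT              # or hide what you don't care about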
 
Here is another example - the first line is a log line that I copied out of journalctl -f on my workstation, and then I just asked it "what is this?" xD

The answer is quite acceptable, I think. Note the last line, which explains the (cryptic) log line in human-understandable terms.

Code:
>>> """
... Apr 15 21:11:53 host ollama[608101]: {"function":"update_slots","level":"INFO","line":1627,"msg":"slot released","n_cache_tokens":406,"n_ctx":2048,"n_past":405,"n_system_tokens":0,"slot_id":0,"task_id":0,"tid":"139644511430336","timestamp":1713215513,"truncated":false}
...
... what is this?
... """
This log message is likely from a server running the Ollama AI system. The message appears to be JSON-formatted and includes various details about an event that occurred, such as:

- `function`: "update_slots"
- `level`: "INFO"
- `line`: 1627
- `msg`: "slot released"
- `n_cache_tokens`: 406
- `n_ctx`: 2048
- `n_past`: 405
- `n_system_tokens`: 0
- `slot_id`: 0
- `task_id`: 0
- `tid`: "139644511430336"
- `timestamp`: 1713215513
- `truncated`: false

The message seems to indicate that a slot (possibly related to AI model execution) has been released, and it provides some statistics or metrics associated with this event. However, without more context or information about the
Ollama system, it's difficult to determine the exact meaning of each field in the log message.

That's what you might want, I think - a human-readable answer to "how many HTTP 5xx did we have this month".
 
Yup, I really like it. It's very good for inspecting firewall logs because you can filter in based on regex; compared to reading firewall logs directly, lnav is the starship Enterprise.

I'm not sure how much longer this will be a thing. I'm busy with non-Linux-related nonsense atm, but as soon as I have a free slot again I'll invest time into automating things away with LLMs. Logs will be one of the first.

The times where I "look at logs" have to come to an end :p
 
The times where I "look at logs" have to come to an end :p
What I would like next is some sort of cron job or something that would trigger a message on my desktop when a certain regexed term appears in any of the logs on the system.

That would be useful for subscribing to suspicious network traffic and being able to respond immediately.
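I imagine something like this would be a starting point (a sketch: it assumes notify-send from libnotify is installed, the script runs inside my desktop session, and the regex is a placeholder):

Code:
#!/bin/bash
# follow the journal and pop a desktop notification on matching lines
journalctl -f -n 0 \
  | grep --line-buffered -E 'DROP|REJECT' \
  | while read -r line; do
      notify-send -u critical "Log alert" "$line"
    done

A systemd user service would probably suit this better than cron, since it keeps running and already has access to the session's D-Bus.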
 
