I want to know if there is an efficient way to monitor a process's resource consumption (CPU, memory, network bandwidth) in Linux. I want to write a daemon in C++ that does this monitoring for some given PIDs. From what I know, the classic solution is to periodically read the information from /proc, but this doesn't seem to be the most efficient way, since it involves many system calls. For example, to monitor the memory usage of 50 processes every second, I have to open, read, and close 50 files under /proc (that means 150 system calls) every second, not to mention the parsing involved when reading these files. Another problem is network bandwidth consumption: it cannot be easily computed for each process I want to monitor. The solution adopted by NetHogs involves pretty high overhead in my opinion: it captures and analyzes every packet using libpcap, then for each packet it determines the local port and searches /proc to find the corresponding process. Do you know of more efficient alternatives to these methods, or of any libraries that deal with these problems?

Taskstats is a netlink-based interface for sending per-task and per-process statistics from the kernel to userspace (see usr/src/linux/Documentation/accounting/taskstats.txt). It was designed for the following benefits: efficiently providing statistics during the lifetime of a task and on its exit; a unified interface for multiple accounting subsystems; and extensibility for use by future accounting patches. This interface lets you monitor CPU, memory, and I/O usage by processes of your choosing, and you only need to set up and receive messages on a single socket. It does not, however, differentiate (for example) disk I/O from network I/O. If that matters to you, you might go with an LD_PRELOAD interception library that tracks socket operations, assuming, of course, that you can control the startup of the programs you wish to observe and that they won't do trickery behind your back. I can't think of any lightweight solutions if those still fail, but linux-audit can globally trace syscalls, which seems a fair bit more direct than re-capturing and analyzing your own network traffic.

I like munin: with pretty much just installation (munin-node on each host and the munin 'master' on the collecting and graphing server) and pointing it at the hosts, I got full detail on hardware sensors, CPU, disks, memory, interrupts, and lots more. It has a web interface for viewing, but not, as far as I know, for configuring.

It is also very easy to make collectl work like the top utility: just run collectl --top in your terminal and you will see output similar to what top gives you on your Linux system.