Pulling info from various files



Hi, I'm new to Linux, so please excuse any ignorance.

I have a folder with 562 log files. The log files contain info with a trace count which keeps increasing by 1 until approximately 22,000. What I need to do is search each file in the folder and pull out the file name and the final trace count. Is there any way this can be done without pulling the previous 21,999 trace counts in each file?

So you need to cat the last line of each one, then print a few parts of it?

for x in $(tail -n 1 /folder/*.log); do cat $x | awk '{print $2, $5}';done

So, if this works ... :)

Breaking it down:

tail -n 1
will print the last line of a file

cat $x | awk '{print $2, $5}'
will take the line and print the 2nd and 5th fields (using space as the separator).
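As a side note, the loop above iterates over the *output* of tail rather than over the files, so the filenames never reach awk. A minimal corrected sketch of the same idea, wrapped in a function so the directory can be passed in (it would be /folder in this thread; the field numbers 2 and 5 are carried over as assumptions):

```shell
# Iterate over the files themselves, not over tail's output.
last_line_fields() {
  for x in "$1"/*.log; do
    # print the file name, then fields 2 and 5 of the file's last line
    printf '%s ' "$x"
    tail -n 1 "$x" | awk '{print $2, $5}'
  done
}
```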
It's not the last line though, e.g.:

seg fault
seg fault
trace count 1
trace count 22000
seg fault
seg fault
How about:

for x in $(grep 'trace count' /folder/*.log|tail -n 1); do cat $x | awk '{print $2, $5}';done
Or, if the lines are that short and there's no need to awk things out of them:

grep 'trace count' /folder/*.log| tail -n 1
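One caveat with that pipeline: tail runs once on the combined grep output, so it yields a single line for all 562 files rather than one per file. A per-file variant (a sketch; the "trace count" pattern is taken from the sample above):

```shell
# For each log, keep only the last matching "trace count" line.
last_trace() {
  for f in "$1"/*.log; do
    # grep prints every matching line; tail keeps just the final one
    printf '%s: ' "$f"
    grep 'trace count' "$f" | tail -n 1
  done
}
```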
This did nothing? Each file has approximately 160,000 lines.
My syntax is probably wrong then. Try looking into cat/grep/tail, etc.
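For the record, one way to answer the original question without scanning past all 22,000 counts in each file is to read each file backwards with tac (GNU coreutils) and stop at the first match. A sketch, assuming the matching lines look like "trace count N" as in the sample:

```shell
# Print each file name and its final trace-count line, reading from the
# end of the file so grep can stop at the first (i.e. last) match.
final_counts() {
  for f in "$1"/*.log; do
    printf '%s: ' "$f"
    tac "$f" | grep -m 1 'trace count'
  done
}
```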
