Process and Analyze Log Files with awk/sed/grep
Intermediate · 12 min
Build a log processing pipeline to extract, filter, aggregate, and summarize information from application and server logs.
Prerequisites
- Bash with GNU coreutils
- Sample log files to process
Steps
1. Filter log lines by level or pattern
Use grep to extract only error or warning lines from a log file.
$ grep -E '(ERROR|WARN)' /var/log/app.log | tail -20
Use grep -i for case-insensitive matching or grep -c to count matches.
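To see these flags in action, here is a short worked example on hypothetical sample data (the log format and file path are invented for illustration):

```shell
# Create a small sample log (hypothetical "DATE TIME LEVEL message" format)
cat > /tmp/sample.log <<'EOF'
2026-03-11 09:58:01 INFO service started
2026-03-11 10:05:12 ERROR db timeout
2026-03-11 10:07:44 WARN slow query
2026-03-11 11:30:00 error lowercase level
EOF

# Exact-case match: finds the ERROR and WARN lines (2 matches)
grep -E '(ERROR|WARN)' /tmp/sample.log

# -i makes the match case-insensitive, -c prints only the count (3 here,
# because the lowercase "error" line now matches too)
grep -icE '(ERROR|WARN)' /tmp/sample.log
```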
2. Extract specific fields with awk
Parse structured log lines to extract timestamps, status codes, or response times.
$ awk '{print $1, $2, $NF}' /var/log/access.log | head -20
$1 is the first field and $NF is the last. Change the field delimiter with -F (e.g. awk -F',' for CSV logs).
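The title also mentions sed, which complements awk here: where awk splits on whitespace fields, sed can extract pieces by regular expression. A sketch on hypothetical access-log-style lines:

```shell
# Hypothetical access-log-style sample (format invented for illustration)
cat > /tmp/access.log <<'EOF'
10.0.0.1 - - [11/Mar/2026:10:00:01] "GET /api/users HTTP/1.1" 200 0.041
10.0.0.2 - - [11/Mar/2026:10:00:02] "GET /api/orders HTTP/1.1" 500 1.273
EOF

# awk by position: first field (client IP) and last field (response time)
awk '{print $1, $NF}' /tmp/access.log

# sed by pattern: keep only the request method and path from each line
sed -E 's/^.*"([A-Z]+) ([^ ]+).*$/\1 \2/' /tmp/access.log
```

Use awk when fields are at fixed positions; reach for sed when the piece you want is easier to describe by its shape than by its column number.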
3. Count occurrences and find top errors
Aggregate log entries to find the most frequent errors or status codes.
$ grep 'ERROR' /var/log/app.log | awk '{print $NF}' | sort | uniq -c | sort -rn | head -10
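The same aggregation can be done in a single awk pass with an associative array, which avoids one sort of the full stream. A sketch on hypothetical sample data:

```shell
# Hypothetical sample: last field is an error tag
cat > /tmp/app.log <<'EOF'
2026-03-11 10:01:00 ERROR db_timeout
2026-03-11 10:02:00 ERROR db_timeout
2026-03-11 10:03:00 ERROR disk_full
EOF

# Count each distinct last field in one pass, then sort by frequency
awk '/ERROR/{c[$NF]++} END{for (k in c) print c[k], k}' /tmp/app.log | sort -rn
```

Here only the (small) set of distinct error tags is sorted, rather than every matching line.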
4. Extract log entries in a time range
Filter logs between two timestamps with an awk string comparison; this works because ISO-8601 timestamps sort lexicographically.
$ awk '$1" "$2 >= "2026-03-11 10:00:00" && $1" "$2 <= "2026-03-11 12:00:00"' /var/log/app.log
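A worked example on hypothetical sample data, plus a sed address-range alternative (sed prints from the first line matching the start pattern through the first matching the end pattern, so it only works when lines with those exact timestamps exist):

```shell
# Hypothetical sample spanning the window boundaries
cat > /tmp/range.log <<'EOF'
2026-03-11 09:59:59 INFO before window
2026-03-11 10:00:00 INFO window opens
2026-03-11 11:45:00 ERROR inside window
2026-03-11 12:00:00 INFO window closes
2026-03-11 12:30:00 INFO after window
EOF

# awk: lexicographic comparison keeps the 3 lines inside the window
awk '$1" "$2 >= "2026-03-11 10:00:00" && $1" "$2 <= "2026-03-11 12:00:00"' /tmp/range.log

# sed: pattern-to-pattern range, same 3 lines for this sample
sed -n '/10:00:00/,/12:00:00/p' /tmp/range.log
```

The awk form is more robust: it needs no line to match the boundary timestamps exactly.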
5. Generate a summary report
Create a one-liner that produces a complete log summary with counts by level.
$ awk '/ERROR/{e++} /WARN/{w++} /INFO/{i++} END{printf "Errors: %d\nWarnings: %d\nInfo: %d\nTotal: %d\n", e, w, i, NR}' /var/log/app.log
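Running the one-liner against a small sample makes the output shape concrete (the sample data is hypothetical):

```shell
# Hypothetical sample: 1 ERROR, 1 WARN, 2 INFO lines
cat > /tmp/app.log <<'EOF'
2026-03-11 10:01:00 ERROR db timeout
2026-03-11 10:02:00 WARN slow query
2026-03-11 10:03:00 INFO request ok
2026-03-11 10:04:00 INFO request ok
EOF

# Prints: Errors: 1 / Warnings: 1 / Info: 2 / Total: 4 (one per line)
awk '/ERROR/{e++} /WARN/{w++} /INFO/{i++} END{printf "Errors: %d\nWarnings: %d\nInfo: %d\nTotal: %d\n", e, w, i, NR}' /tmp/app.log
```

Note that uninitialized awk variables print as 0 with %d, so the report stays well-formed even when a level never appears.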
Full Script
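One possible way to combine the steps above into a single report, sketched here with hypothetical sample data so it runs as-is; point LOG at your real file (e.g. /var/log/app.log) instead:

```shell
# Hypothetical sample log in "DATE TIME LEVEL message" format
cat > /tmp/app.log <<'EOF'
2026-03-11 10:01:00 INFO service started
2026-03-11 10:05:12 ERROR db_timeout
2026-03-11 10:07:44 WARN slow_query
2026-03-11 10:09:01 ERROR db_timeout
EOF

LOG=/tmp/app.log   # swap in your real log path

echo "== Recent errors and warnings =="
grep -E '(ERROR|WARN)' "$LOG" | tail -20

echo "== Top error messages =="
grep 'ERROR' "$LOG" | awk '{print $NF}' | sort | uniq -c | sort -rn | head -10

echo "== Counts by level =="
awk '/ERROR/{e++} /WARN/{w++} /INFO/{i++} END{printf "Errors: %d\nWarnings: %d\nInfo: %d\nTotal: %d\n", e, w, i, NR}' "$LOG"
```

Adjust the field positions and level names to match your log format before relying on the counts.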