I am trying to use the following pattern on Ubuntu:
grep -Eri "warning|error|critical|severe|fatal" --color=auto
to find relevant errors in many different .log files recursively in /var/log and its subfolders.
The issue is that this prints tens of thousands of matching lines as it runs. I'd like to filter the output in at least one of the following ways:
- Print a match but then skip it once more than e.g. 3 occurrences of the same match have been shown
- Show only unique matches (i.e. print one of each distinct line found)
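For the second bullet, would piping into sort and uniq be roughly the right direction? This is a rough sketch of what I had in mind (the -h flag drops the file name prefix; lines that differ only by timestamp would presumably still count as different):

# count how many times each distinct matching line appears, most frequent first
grep -Erih "warning|error|critical|severe|fatal" /var/log | sort | uniq -c | sort -rn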
Can I do this by piping the output to something? Going through each log for errors by hand is incredibly time consuming, which is why I am trying this, but the command prints so much output that it is not really usable either. I have tried piping to 'less', but that removes the highlighting, which makes it harder to read, and it does not fix the sheer size of the output.
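(I suspect forcing the colour through the pipe would at least keep the highlighting in less, something like the line below, but that still leaves the problem of the volume.)

# keep the ANSI colour codes and let less render them
grep -Eri "warning|error|critical|severe|fatal" --color=always /var/log | less -R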
I realise I could also limit the command to a few specific files at a time, but as mentioned some logs are full of matches while others have very few, so filtering out duplicates would still be really helpful.
Here is an example error line in one of the many logs I am searching:
./artifactory/artifactory-service.log:20:2021-07-20T08:45:30.248Z [jfrt ] [ERROR] [.j.a.c.g.GrpcStreamObserver:97] [c-default-executor-1] - refreshing affected platform config stream - got an error
If there are hundreds of such errors, I would like to show e.g. at most 3 of them before moving on to the next distinct match.
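Is this something awk could do? My guess is a sketch like the one below, which prints each distinct line at most 3 times, although I suspect the timestamp would need to be stripped first for repeated errors to actually count as duplicates:

# print each distinct line at most 3 times (3 is just an example threshold)
grep -Erih "warning|error|critical|severe|fatal" /var/log | awk 'seen[$0]++ < 3'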
Alternatively, given how the dates are listed in the logs, it would be great to filter for matches from specific dates only. How would I go about doing this? Date filtering would cut the output down greatly.
I have also tried
find . -name '*.log' -exec grep -Ein 'warning|warn|error|critical|severe|fatal' --color=auto {} +
and piping the output through sort | uniq, but that does not help with limiting the scope to specific matching dates, as I am unsure how to include a date in the search string.
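Would simply putting the date in front of the keyword alternation work? Something like this is what I was imagining, based on the timestamp format in the example line above (so only lines containing that date and one of the keywords would match):

# restrict matches to a single day by requiring the date to appear before the keyword
find . -name '*.log' -exec grep -Ein '2021-07-20.*(warning|warn|error|critical|severe|fatal)' --color=auto {} +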