Wednesday, October 5, 2016

grep and awk like SQL


The *nix console tools are easy to understand if you map them to SQL commands. This is useful for analysing big files with a strict structure, such as logs or dumps.

select * from file where column = "keyword"
cat ./bigfile.log | fgrep "keyword"
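A quick sanity check with toy input (the data below is made up purely for illustration):

printf 'a keyword x\nb other y\n' | fgrep "keyword"
# prints: a keyword x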

select column1, sum(column2) from file where column = "keyword" group by column1
cat ./bigfile.log | fgrep "keyword" |  awk '{a[$1]+=$2}END{for (i in a){print i,a[i]}}'
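Note that the awk part groups by the first field ($1) and sums the second ($2). A minimal sketch on made-up data (the output order of awk's for (i in a) loop is unspecified):

printf 'api 10\napi 20\nweb 5\n' | awk '{a[$1]+=$2}END{for (i in a){print i,a[i]}}'
# api 30
# web 5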

select column1, avg(column2) from file where column = "keyword" group by column1
cat ./bigfile.log | fgrep "keyword"  |  awk '{a[$1]["s"]+=$2; a[$1]["c"]+=1;}END{for (i in a){ print i,a[i]["s"]/a[i]["c"]} }'
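The arrays-of-arrays syntax above (a[$1]["s"]) requires gawk 4.0+. A portable sketch using two plain arrays, on the same made-up data as before:

printf 'api 10\napi 20\nweb 5\n' | awk '{s[$1]+=$2; c[$1]+=1}END{for (i in s){print i,s[i]/c[i]}}'
# api 15
# web 5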

select column, count(*) from file group by column
cat ./bigfile.log | perl -pe 's/^(REGEXP_PATTERN).*$/$1/' | sort | uniq -c
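The sort is required because uniq -c only counts adjacent duplicate lines. A toy run, with \w+ standing in for REGEXP_PATTERN:

printf 'GET /a\nPOST /b\nGET /c\n' | perl -pe 's/^(\w+).*$/$1/' | sort | uniq -c
#   2 GET
#   1 POST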

With perl or awk, the columns to analyse can be extracted from the source file for further calculation, as in the sketch below.
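For instance, assuming (hypothetically) that the value of interest is the 5th whitespace-separated field, a group-by count with descending order looks like:

cat ./bigfile.log | awk '{print $5}' | sort | uniq -c | sort -rn
# select column5, count(*) from file group by column5 order by count(*) desc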

Grouping by time to calculate rates and averages

For example, if every line of your log file begins with a timestamp: 11:29:37.495 DEBUG - <SOME INFO>
Grouping by time can then be done by cutting the trailing characters off, keeping only the relevant part of the timestamp. uniq -c counts each run of adjacent identical lines, which works here without sort because log lines already come in time order.
seconds: cat ./bigfile.log | cut -c -8  | uniq -c
minutes: cat ./bigfile.log | cut -c -5  | uniq -c
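A toy run with made-up timestamps, grouping by second:

printf '11:29:37.495 DEBUG - a\n11:29:37.801 DEBUG - b\n11:29:38.100 DEBUG - c\n' | cut -c -8 | uniq -c
#   2 11:29:37
#   1 11:29:38

Combined with a filter, this gives e.g. errors per minute (assuming a hypothetical "ERROR" level in the log): cat ./bigfile.log | fgrep "ERROR" | cut -c -5 | uniq -c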
