- As @TobySpeight pointed out in a comment, the `cat` is redundant. Any time you have a line like `cat "$file" | grep "$pattern"` you can replace it with `grep "$pattern" "$file"`
- If your log entries are already ordered by time (at least for each user's login events), you can likely use `tac` instead of `sort -r`
- Can a user have `logged in` be their user ID? I assume not, but if they can, we have a bug where instead of sending their most recent login, you might send their most recent event of any kind (or something malformatted; it might depend on how the rest of the log looks). Either way, if this turns out to be a problem it can easily be worked around by adding an anchor to the `grep`'s pattern, or using `grep -x` (there's a quick demonstration after this list)
- I'm pretty sure the first two `awk` calls can be combined into a single one like `awk -F' ' '!seen[$4]++ { p = index($2, "+"); print $1 "T" substr($2, 1, p-1) "Z", $4 }'` (also demonstrated after this list)
- By the way, the `awk` looks dense without spaces; putting some spaces between operators, delimiters and such makes it a lot easier to look at
- Using `awk` and `cut` to remove `[` and `]` seems a bit awkward. Assuming those characters can't appear inside user IDs (since that's how user IDs are delimited, it'd be annoying if they could), `tr -d '[]'` should do the job just fine here
- Having a long line like that is annoying, but it becomes less annoying if it's split between multiple physical lines, which is possible (see below)
- Personally, I think it'd look better if you were to pipe your data into the `while` instead of sending it in from behind using `< <()`. It's a matter of opinion, but to me it better represents the flow of data and also subjectively looks nicer
- I imagine it might be useful to parse log files like this regardless of what they're called. Hardcoding the name `file.log` makes that harder. One alternative could be taking the name as a command line parameter and beginning with something like `grep -x "$TODAY.*logged in" "$1"`. But doing command line argument parsing properly is a pain, not doing it properly makes for some really awkward-to-use scripts unless you remember how you wrote them in the first place, and pipes are pretty neat, so maybe we can get away with just operating on `/dev/stdin` instead. Granted, having to call it like `./submit-logins < file.log` is slightly more annoying (especially if you aren't doing it from the command line), but it's an option
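To show what I mean about anchoring, here's a small demonstration. I'm guessing at the log format here, so treat the sample lines as hypothetical:

```bash
# Two made-up log lines: one where "logged in" is a (malicious?) user ID,
# one that's an actual login. The date stands in for "$TODAY".
printf '%s\n' \
    '2024-05-30 14:23:45.123+02:00 INFO [logged in] changed password' \
    '2024-05-30 14:25:01.456+02:00 INFO [alice] logged in' |
    grep -x '2024-05-30.*logged in'
# Only the second line matches: -x requires the whole line to match the
# pattern, so "logged in" has to be at the end, not just anywhere in the line.
# Without -x, both lines would match.
```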
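And here's roughly what that combined `awk` (together with the `tr -d '[]'` cleanup) produces for a single login line. Again, the exact log format is an assumption on my part, inferred from the field numbers in your script:

```bash
# Made-up login line - the actual format of file.log is an assumption here
echo '2024-05-30 14:23:45.123+02:00 INFO [alice] logged in' |
    awk -F' ' '!seen[$4]++ { p = index($2, "+"); print $1 "T" substr($2, 1, p-1) "Z", $4 }' |
    tr -d '[]'
# prints: 2024-05-30T14:23:45.123Z alice
```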
All in all, I'd probably recommend something like

```bash
#!/bin/bash
SERVER_API_URL="http://localhost:8080/api/user"
TODAY=$(date +'%Y-%m-%d')
grep -x "$TODAY.*logged in" /dev/stdin |
tac |
awk -F' ' '!seen[$4]++ { p = index($2, "+"); print $1 "T" substr($2, 1, p-1) "Z", $4 }' |
tr -d '[]' |
while read -r login_time user_id; do
# following line is wrapped in echo for testing
echo "curl -X PUT $SERVER_API_URL/$user_id/lastLoginTime/$login_time"
done
```
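Assuming it's saved as `submit-logins` and made executable, using it would look something like this (the user ID and timestamp in the output are of course made up):

```bash
chmod +x submit-logins
./submit-logins < file.log
# With the echo still in place, each of today's logins prints a line like:
# curl -X PUT http://localhost:8080/api/user/alice/lastLoginTime/2024-05-30T14:23:45.123Z
```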