I have a log of 55GB in size.
I tried:
cat logfile.log | tail
But this approach takes a lot of time. Is there any way to read huge files faster or any other approach?
The cat logfile.log | … here is superfluous, and actually contributes to it being slow. tail logfile.log without the cat would make a lot more sense!
It's going to be much faster, because when the input is not seekable, tail has to read all of standard input, line by line, keeping the last 10 lines in a buffer (in case they turn out to be the final 10 lines); and having the input come from a pipe through your cat invocation guarantees it's not seekable.
That is slow, and unless a single line in your file can be gigabytes in size, pretty stupid: just skip the first 54.9 GB. The remaining 100 MB will certainly contain at least the last 10 lines, and getting the last 10 lines out of 100 MB is fast enough:
tail --bytes 100M logfile.log | tail
However, if you're using GNU Coreutils'¹ tail implementation, it already does this (i.e., it seeks to the end of the file minus 2.5 kB, and looks from there). By not abusing cat here and letting tail read the file itself (or just using redirection, which works the same!), you get a much faster result.
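As a rough sketch of what that seek-from-the-end strategy looks like done by hand (the file name, line count, and 4 KiB window are illustrative, not anything tail actually uses internally):

```shell
# Build a sample log: 1000 short lines of made-up data.
printf 'line %s\n' $(seq 1 1000) > sample.log

# Seek-from-the-end by hand: read only the last 4 KiB of the file
# (or the whole file, if it is smaller), then let tail pick the
# last 10 lines out of that small window.
size=$(wc -c < sample.log)
skip=$(( size > 4096 ? size - 4096 : 0 ))
dd if=sample.log bs=1 skip="$skip" 2>/dev/null | tail -n 10
# Last line printed: line 1000
```

No matter how large sample.log grows, only the final window is ever read, which is exactly why tail on a seekable file is fast.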
¹ GNU Coreutils and modern busybox are the two implementations of tail that I've checked; both do this. Stéphane points out below that even the original 1970s PWB Unix implementation does this – but it's still merely an implementation detail.
All tail implementations do that, including the original implementation in PWB Unix from the 70s. It's the tail --bytes 100M that is GNU-specific.
tail logfile.log is sufficient.
With tail ... < file.txt, tail can't get the filename; it only gets an open file descriptor (and anyway, there might not be an unambiguous filename, since there could be multiple hard links to the file). But that descriptor is not just a stream: it's a proper open file descriptor to the file, like any other you get when opening a file, and files can be seeked. It's not a question of the platform, but of a file vs. a pipe.
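A quick way to convince yourself that redirection behaves like a file and not like a pipe (demo.log is a made-up name for this sketch):

```shell
# Made-up sample file with 100 lines.
printf 'line %s\n' $(seq 1 100) > demo.log

tail -n 3 demo.log         # tail opens the file itself: seekable
tail -n 3 < demo.log       # redirection: still a seekable file descriptor
cat demo.log | tail -n 3   # pipe: not seekable, tail must read everything
# All three print lines 98, 99, 100 -- but only the first two can seek.
```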
You should use tail logfile directly to get the last ten lines of the file without reading the whole file, which is what cat logfile | tail is doing.
tail -nX /path/to/your/logfile
For X use the number of lines you wish to read.
Example
tail -n30 /var/log/syslog
Shows the last 30 lines of my /var/log/syslog
tail does.
cat | tail would suggest otherwise.
Giving tail the path to the file (instead of using it as a filter reading from stdin) lets it seek instead of reading through the file from the start. The other answers do explain that, especially Marcus's.
You can use tac, which is cat backwards. But don't, because it's hard to limit it to 10 lines. Use tail instead because that's what it's for.
tac FILE | head -n10 | tac would give the right result, but it's needlessly complicated and slow when tail FILE would do the job. But your answer doesn't explain why tail FILE works as OP wanted and tail < FILE doesn't.
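For the record, the two approaches do produce identical output; a tiny demo with a made-up file name:

```shell
printf '%s\n' a b c d e > letters.txt

tac letters.txt | head -n 3 | tac   # reverse, take 3, reverse back
tail -n 3 letters.txt               # same result, with seeking
# Both print: c d e
```

The tac pipeline has to reverse the file twice, while tail just seeks to the end, which is why tail is the right tool here.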
tail -1000 logfile.log > last_1k_lines_logfile.log, which puts it in line with @MarcusMuller's answer.

If you're using cat to send the contents of a single file to a command via a pipe, you are almost always wrong. The only times you should be using cat to feed a pipe are when you need to actually concatenate multiple files to operate on all of them (relatively rare these days), or when the command you're piping things to truly doesn't support opening a file directly (also relatively rare these days). Hence the classic "useless use of cat" award.
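As a sketch of the rare case where cat in a pipe is actually justified (the file names are made up): concatenating several files so that one filter sees all of them:

```shell
# Two made-up log fragments.
printf 'a\nb\n' > part1.log
printf 'c\nd\n' > part2.log

# cat is doing its real job here: concatenating multiple files.
cat part1.log part2.log | tail -n 3
# Prints: b c d
```

With a single file, the same effect needs no cat at all: tail -n 3 part1.log.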