
  • 2
    Note that sorting a very large file is not an issue per se for sort; it can sort files larger than the available RAM+swap. Perl, OTOH, will fail if there are only a few duplicates, since it must hold every unique line in memory. Commented Mar 6, 2009 at 11:06
  • 1
    Yes, it's a trade-off depending on the expected data. Perl is better for huge datasets with many duplicates (no disk-based storage required). Huge datasets with few duplicates should use sort (and disk storage). Small datasets can use either. Personally, I'd try Perl first and switch to sort if it fails. Commented Mar 6, 2009 at 11:33
  • That's because sort only gives you a benefit if it has to swap to disk. Commented Mar 6, 2009 at 11:34
  • 5
    This is great when I want the first occurrence of every line. Sorting would break that. Commented May 10, 2012 at 19:30
  • Ultimately Perl still does per-line work to insert each entry into its hash (Perl's dictionary type), so the deduplication isn't free; hashing is just O(n) on average rather than a sort's O(n log n). Commented Aug 27, 2021 at 0:30
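
The trade-off discussed above can be sketched as follows; this is a minimal illustration (file name `input.txt` is made up), assuming a standard perl and a POSIX sort:

```shell
# Create a small sample file with duplicate lines.
printf 'b\na\nb\nc\na\n' > input.txt

# Perl: keeps the FIRST occurrence of each line and preserves input order.
# The %seen hash holds every unique line, so memory grows with unique lines.
perl -ne 'print unless $seen{$_}++' input.txt
# prints: b a c (one per line, original order)

# sort -u: deduplicates but reorders the lines; for huge files it can
# fall back to temporary files on disk instead of holding everything in RAM.
sort -u input.txt
# prints: a b c (one per line, sorted order)
```

Use the Perl form when first-occurrence order matters or duplicates are plentiful; use `sort -u` when the unique lines may not fit in memory.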