Every sort algorithm has a worst case, and in many cases the worst case is really bad, so it is worth testing for it. The problem is that there is no single worst case you can derive just from knowing the basic algorithm; it depends on the details of the implementation.
Common worst cases include: already sorted; sorted in reverse; nearly sorted with one element out of order; all values the same; all the same except the first (or last) is higher (or lower). We once had a sort whose worst case was a particular sawtooth pattern, which was very hard to predict but quite common in practice.
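If you want to feed those patterns into a test, generators along these lines are enough. This is only a sketch: the function names are made up for illustration, and since the exact sawtooth that caused trouble above isn't specified, the one here is just a generic sawtooth.

```python
def sorted_input(n):
    return list(range(n))

def reverse_sorted(n):
    return list(range(n, 0, -1))

def nearly_sorted(n):
    data = list(range(n))
    data.insert(0, data.pop())       # move the last element to the front: one element out of order
    return data

def all_equal(n):
    return [7] * n

def all_equal_but_last(n):
    return [7] * (n - 1) + [8]       # all the same except the last is higher

def sawtooth(n, period=8):
    return [i % period for i in range(n)]   # a generic sawtooth, not the specific one mentioned above
```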
The worst case for quicksort is input that makes it always pick the worst possible pivot, so that one of the partitions ends up with only a single element. If the pivot is the first element (a bad choice), then already-sorted or reverse-sorted data is the worst case. For a median-of-three pivot, data that is all the same, or all the same except that the first or last element differs, does the trick.
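As a sketch of the first case, here is a naive quicksort with a Lomuto-style partition and the first element as pivot (not any particular library's implementation). On already-sorted input nothing is smaller than the pivot, so every partition step peels off only a single element.

```python
def naive_quicksort(a, lo=0, hi=None):
    """Quicksort with the first element as pivot (a deliberately bad choice)."""
    if hi is None:
        hi = len(a) - 1
    if lo >= hi:
        return
    pivot = a[lo]
    i = lo
    # Lomuto-style partition: after the loop, a[lo+1..i] < pivot <= a[i+1..hi].
    for j in range(lo + 1, hi + 1):
        if a[j] < pivot:
            i += 1
            a[i], a[j] = a[j], a[i]
    a[lo], a[i] = a[i], a[lo]          # put the pivot in its final position
    # On already-sorted input i stays at lo: the left partition is empty and the
    # right one holds everything else, so the recursion depth grows with n.
    naive_quicksort(a, lo, i - 1)
    naive_quicksort(a, i + 1, hi)
```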
For quicksort the average complexity is n log n and the worst case is n^2. The reason it is worth triggering the worst-case behaviour is that this is also the case that produces the greatest depth of recursion. For a naive implementation the recursion depth can be n, which may trigger a stack overflow. Testing other extreme situations (including the best case) may be worthwhile for similar reasons.
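A minimal check of that, reusing the hypothetical naive_quicksort sketch above: already-sorted input drives the recursion roughly n levels deep, which Python's recursion limit turns into a visible failure.

```python
import sys

sys.setrecursionlimit(1000)          # make the limit explicit (this is the usual default)

data = list(range(5000))             # already sorted: worst case for a first-element pivot
try:
    naive_quicksort(data)
except RecursionError:
    print("worst-case input drove the recursion ~n deep and overflowed the stack")
```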