
Wikipedia and other textbooks give binary selection sort's asymptotic worst-case time complexity as O(n^2), usually justifying this with the extra work caused by swaps. I don't understand how this is derived.

Pseudocode is below; assume binarysearch(array, val, lo, hi) -> index of placement.

    input arr
    mid := arr[(len(arr) - 1) // 2]
    sortarr := [mid]
    for each val in arr:
        index := binarysearch(sortarr, val, 0, len(sortarr))
        insert val into sortarr at index
    return sortarr

The worst-case time complexity of binarysearch() is O(log n). The for loop iterates n times, so I would expect the worst-case time complexity to be O(n log n). What am I missing?

  • That isn't selection sort. Please fix the name or the algorithm. Commented Oct 14 at 12:05
  • That's not selection sort. What you're showing is an insertion sort that uses a second array for the output. And unless I'm reading it wrong, the output array will end up with a duplicated value from arr[mid]. Commented Oct 16 at 1:41

1 Answer


It is true that the time complexity of binary search is O(log n), but inserting into an array at an arbitrary position can take time linear in the size of the array, because all elements after the insertion point have to be shifted. Over the n insertions, that shifting adds up to O(n^2) in the worst case.
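
A minimal sketch in Python (the bisect module and the shift counter are my own illustration, not the question's pseudocode) makes that hidden linear cost visible: the binary search is cheap, but list.insert moves everything to the right of the insertion point.

    import bisect

    def binary_insertion_sort(arr):
        """Binary insertion sort that also counts how many elements each insert shifts."""
        sorted_arr = []
        shifts = 0
        for val in arr:
            # Finding the insertion point is O(log n)...
            index = bisect.bisect_left(sorted_arr, val)
            # ...but the insert itself moves every element after `index` one slot
            # to the right, which is O(n) in the worst case.
            shifts += len(sorted_arr) - index
            sorted_arr.insert(index, val)
        return sorted_arr, shifts

    # Reverse-sorted input is the worst case: every value lands at index 0 and
    # shifts the entire output array, giving 0 + 1 + ... + (n-1) = n(n-1)/2 shifts.
    n = 1000
    _, worst_shifts = binary_insertion_sort(list(range(n, 0, -1)))
    print(worst_shifts)  # 499500 == n * (n - 1) // 2

The comparisons still total O(n log n), but the shifting dominates, which is where the O(n^2) worst case comes from.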


6 Comments

  • How did you make this answer anonymous?
  • I didn't; I think that's a visual bug.
  • Would you be able to mitigate this theoretically O(n) reshuffling with the usage of a min/max heap?
  • I guess this would make it closer to heap sort instead.
  • Yes, that would essentially be a heapsort (see the sketch after these comments).
  • @PostalCat Heapsort would, by the way, be closer to selection sort than your question's "selection sort" is. (Why are you not fixing the question?)
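
To make the heap suggestion in the comments concrete, here is a minimal heapsort sketch using Python's heapq (my own illustration, not code from the thread): pushing everything onto a min-heap and popping it back out avoids the shifting entirely.

    import heapq

    def heap_sort(arr):
        # heapify is O(n) and each of the n pops is O(log n), so the whole
        # sort is O(n log n), with no element shifting.
        heap = list(arr)
        heapq.heapify(heap)
        return [heapq.heappop(heap) for _ in range(len(heap))]

    print(heap_sort([5, 1, 4, 2, 3]))  # [1, 2, 3, 4, 5]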
