# arrays - Quicksort partition why random pivot?

Suppose I have 2 cases:

Case 1: I always choose the 1st element as the pivot. In this case the worst case O(n²) occurs when the array is already sorted or reverse sorted.

Case 2: I choose a random element as the pivot. Here the worst case O(n²) is possible when the random pivot is always the max or the min element of the subarray.

Can't I argue that, if we are given a random array, P(O(n²) in Case 1) = P(O(n²) in Case 2)? Because intuitively P(sorted or reverse-sorted array) = P(random pivot is always the max or the min element of the subarray)?

If so, how is the 2nd case any good, given that we need extra effort to select a random pivot? We need the 2nd case only when the data follows a certain pattern. Am I right? Please enlighten.

When all permutations of the input are equally likely, the probability of choosing a bad pivot every time is the same for both strategies (first or random). It would be the same for any strategy that makes no comparisons (middle, third, alternating between the second-to-last and the second, ...).

(This might be different for a strategy that compares elements, such as median-of-three.)

But the truth is that in practice the permutations aren't equiprobable at all, and there is a strong bias toward nearly sorted sequences.

Said differently, when the input is well shuffled or when you choose the pivot randomly, you have to be very unlucky to draw badly every time, and the probability of the worst case is infinitesimal. For a sorted sequence, the odds are quite different, as you are sure to lose every time!
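To make this concrete, here is a small experiment (a rough sketch using a Lomuto-style partition; the helper names are my own) that counts comparisons for both pivot rules, on shuffled and on sorted input:

```python
import random
import sys

sys.setrecursionlimit(10000)  # first-pivot on sorted input recurses n deep
random.seed(1)                # reproducible run

def count_comparisons(a, pick_pivot):
    """Sort a in place with quicksort and return the number of comparisons."""
    count = 0
    def sort(lo, hi):  # sorts the slice a[lo:hi]
        nonlocal count
        if hi - lo <= 1:
            return
        p = pick_pivot(lo, hi)
        a[lo], a[p] = a[p], a[lo]          # move chosen pivot to the front
        pivot, i = a[lo], lo + 1
        for j in range(lo + 1, hi):        # Lomuto partition
            count += 1
            if a[j] < pivot:
                a[i], a[j] = a[j], a[i]
                i += 1
        a[lo], a[i - 1] = a[i - 1], a[lo]  # pivot into its final position
        sort(lo, i - 1)
        sort(i, hi)
    sort(0, len(a))
    return count

first = lambda lo, hi: lo                        # deterministic: first element
rand = lambda lo, hi: random.randrange(lo, hi)   # random pivot

n = 1000
shuffled = random.sample(range(n), n)
print(count_comparisons(shuffled.copy(), first))  # ~ 2 n ln n on random input
print(count_comparisons(shuffled.copy(), rand))   # about the same
print(count_comparisons(list(range(n)), first))   # n(n-1)/2 = 499500: quadratic
print(count_comparisons(list(range(n)), rand))    # back to ~ n log n
```

On shuffled input the two strategies are statistically indistinguishable; only on the (common) sorted input does the first-element rule degenerate.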

As a side note, picking a random value does have a cost, which is not negligible compared to the partitioning of small sequences. This is why it pays to switch to a simpler sort for sequences of length L or less, and to tune L to its optimal value.
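As an illustration of that cutoff idea, here is a sketch of a hybrid quicksort (the names and the cutoff of 16 are my own choices, not measured optima):

```python
import random

CUTOFF = 16  # "L": below this size, insertion sort wins; tune empirically

def insertion_sort(a, lo, hi):
    """Sort the slice a[lo:hi] in place; fast for short, nearly sorted runs."""
    for i in range(lo + 1, hi):
        x, j = a[i], i
        while j > lo and a[j - 1] > x:
            a[j] = a[j - 1]
            j -= 1
        a[j] = x

def hybrid_quicksort(a, lo=0, hi=None):
    """Random-pivot quicksort that hands small slices to insertion sort."""
    if hi is None:
        hi = len(a)
    while hi - lo > CUTOFF:
        p = random.randrange(lo, hi)
        a[lo], a[p] = a[p], a[lo]          # move random pivot to the front
        pivot, i = a[lo], lo + 1
        for j in range(lo + 1, hi):        # Lomuto partition
            if a[j] < pivot:
                a[i], a[j] = a[j], a[i]
                i += 1
        a[lo], a[i - 1] = a[i - 1], a[lo]
        # recurse on the smaller half, loop on the larger: O(log n) stack
        if i - 1 - lo < hi - i:
            hybrid_quicksort(a, lo, i - 1)
            lo = i
        else:
            hybrid_quicksort(a, i, hi)
            hi = i - 1
    insertion_sort(a, lo, hi)
```

The recurse-on-the-smaller-half trick is a standard companion optimization: it bounds the stack depth at O(log n) regardless of how unlucky the pivots are.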

To avoid the worst case, you would have to choose the optimal pivot for each subdivision: the median element. With any shortcut method of selecting the pivot (random, first, median-of-three or whatever), the possibility of encountering the worst case remains. It's just a question of probabilities.

Certain input cases are likely to occur, at least in some applications, such as the case when the elements are already sorted or nearly sorted.

If there is going to be the threat of worst-case behavior, then it is good to at least mitigate the threat by preventing input cases which are likely from triggering that worst case behavior.

By picking a predictable element, like the first element, you could easily hit the worst case. If a grain of randomness is added, then the pattern will likely be broken, and the actual running time of the sorting algorithm will almost always stay well below O(N^2).

On a related note, picking the pivot at random is not that good an idea either. There are techniques, such as median of medians, that come with a proof that the worst-case running time is still O(N log N). That is a huge advantage over taking the first element as the pivot.

You can refer to this article for implementation based on median of medians: Finding Kth Smallest Element in an Unsorted Array

We're not worried about the runtime when we're given a randomly generated array. We're worried about the runtime when the array is sorted or nearly sorted, which is actually pretty common. We're also worried about the runtime when the array is generated by an adversary with elements specifically chosen to ruin our day.

If it were just about random input, picking the first element as the pivot would be fine.