Why quicksort is called quick

The point of this question is not to debate the merits of this over any other sorting algorithm - certainly there are many other questions that do this. This question is about the name. Why is Quicksort called "Quicksort"?

Sure, it's "quick", most of the time, but not always. There are various modifications to Quicksort that mitigate this problem, but the ones which bring the worst case down to a guaranteed O(n log n) aren't generally called Quicksort anymore.

I just wonder why, of all the well-known sorting algorithms, this is the only one deserving of the name "quick", which describes not how the algorithm works, but how fast it usually is. Mergesort is called that because it merges the data. Heapsort is called that because it uses a heap. Introsort gets its name from "Introspective", since it monitors its own performance to decide when to switch from Quicksort to Heapsort. Similarly for all the slower ones - Bubblesort, Insertion sort, Selection sort, etc.

They're all named for how they work. The only other exception I can think of is "Bogosort", which is really just a joke that nobody ever actually uses in practice.

Why isn't Quicksort called something more descriptive, like "Partition sort" or "Pivot sort", which describe what it actually does? It's not even a case of "got here first".

Mergesort was developed 15 years before Quicksort. I guess this is really more of a history question than a programming one. I'm just curious how it got the name - was it just good marketing?

At the time, research on sorting algorithms wasn't as far advanced as it is today. The computer scientist Tony Hoare found a new algorithm which was quicker than the others, so he published a paper titled "Quicksort", and as the paper was cited, the title stuck.

The abstract of that paper reads: "A description is given of a new method of sorting in the random-access store of a computer. The method compares very favourably with other known methods in speed, in economy of storage, and in ease of programming. Certain refinements of the method, which may be useful in the optimization of inner loops, are described in the second part of the paper."

So what does the algorithm actually do? Quicksort is a divide-and-conquer algorithm: divide by partitioning the array around a pivot element, and conquer by recursively sorting the subarrays on either side of it. The combine step is actually nothing: once the conquer step recursively sorts the subarrays, the whole task is already completed.

In more detail: first choose a pivot element. Then partition: rearrange the array such that all elements with values less than the pivot come before the pivot, while all elements with values greater than the pivot come after it.

The pivot element will be in its final position after the partition. After that, recursively apply the above steps to the sub-array of elements on the left side of the pivot and to the sub-array on the right side of the pivot.
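To make that concrete, here is a minimal sketch in Python of the scheme just described, using the common convention of always picking the last element as the pivot (the function names and layout are mine for illustration, not from the thread or from Hoare's paper):

    def partition(arr, lo, hi):
        # Rearrange arr[lo..hi] around the pivot arr[hi];
        # return the pivot's final index.
        pivot = arr[hi]
        i = lo  # arr[lo..i-1] holds the elements smaller than the pivot
        for j in range(lo, hi):
            if arr[j] < pivot:
                arr[i], arr[j] = arr[j], arr[i]
                i += 1
        arr[i], arr[hi] = arr[hi], arr[i]  # move the pivot into its final spot
        return i

    def quicksort(arr, lo=0, hi=None):
        if hi is None:
            hi = len(arr) - 1
        if lo < hi:
            p = partition(arr, lo, hi)  # divide: pivot lands at index p
            quicksort(arr, lo, p - 1)   # conquer: sort the left part
            quicksort(arr, p + 1, hi)   # conquer: sort the right part
            # combine: nothing to do

Calling quicksort([3, 1, 2]) sorts the list in place to [1, 2, 3].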

Worst case, O(n^2): the worst case occurs when the partition process always picks the greatest or smallest element as the pivot. If we consider the above partition strategy, where the last element is always picked as the pivot, the worst case occurs when the array is already sorted in increasing or decreasing order. Best case, O(n log n) with simple partitioning, or O(n) with three-way partitioning and all keys equal: the best case occurs when the partition process always picks the middle element as the pivot.
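As an aside, here is a sketch of the three-way ("Dutch national flag") partitioning just mentioned, again illustrative Python of my own rather than anything from the thread. With all keys equal, the single pass leaves nothing to recurse on, which is where the O(n) best case comes from:

    def quicksort3(arr, lo=0, hi=None):
        # Three-way quicksort: partitions into < pivot, == pivot, > pivot.
        if hi is None:
            hi = len(arr) - 1
        if lo >= hi:
            return
        pivot = arr[lo]
        lt, i, gt = lo, lo + 1, hi
        while i <= gt:
            if arr[i] < pivot:
                arr[lt], arr[i] = arr[i], arr[lt]
                lt += 1
                i += 1
            elif arr[i] > pivot:
                arr[i], arr[gt] = arr[gt], arr[i]
                gt -= 1
            else:
                i += 1  # equal to the pivot: leave it in the middle band
        quicksort3(arr, lo, lt - 1)  # strictly smaller elements
        quicksort3(arr, gt + 1, hi)  # strictly larger elements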

In the most balanced case, each time we perform a partition we divide the list into two nearly equal pieces. This means each recursive call processes a list of half the size. Consequently, we can make only log2(n) nested calls before we reach a list of size 1, so the depth of the call tree is log2(n). No two calls at the same level of the call tree process the same part of the original list, so each level of calls needs only O(n) time all together (each call has some constant overhead, but since there are only O(n) calls at each level, this is subsumed in the O(n) factor). The result is that the algorithm uses only O(n log n) time.
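The same argument can be written as a recurrence (a standard textbook formulation, not something from the thread itself): a balanced partition does about c·n work per call and leaves two subproblems of half the size, so

    T(n) = 2\,T(n/2) + c\,n, \qquad T(1) = c
    \implies T(n) = c\,n\,(\log_2 n + 1) = O(n \log n)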

Worst-case space complexity: O(n) auxiliary for the naive version, O(log n) auxiliary with Sedgewick's strategy. The space used by quicksort depends on the version used. The in-place version of quicksort has a space complexity of O(log n), even in the worst case, when it is carefully implemented using the following strategies. First, in-place partitioning is used.

This unstable partition requires O(1) space. After partitioning, the partition with the fewest elements is recursively sorted first, requiring at most O(log n) space.

Then the other partition is sorted using tail recursion or iteration, which doesn't add to the call stack and keeps the stack depth bounded by O(log n), as the sketch below shows.

Quicksort with in-place and unstable partitioning uses only constant additional space before making any recursive call. Quicksort must store a constant amount of information for each nested recursive call.

Since the best case makes at most O(log n) nested recursive calls, it uses O(log n) space.
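Here is a minimal sketch of that smaller-partition-first strategy, reusing the hypothetical partition function from the first example: recurse only into the smaller side and loop on the larger one, so at most O(log n) frames are ever on the stack:

    def quicksort_bounded(arr, lo=0, hi=None):
        # Quicksort whose call stack stays O(log n) even in the worst case.
        if hi is None:
            hi = len(arr) - 1
        while lo < hi:
            p = partition(arr, lo, hi)
            if p - lo < hi - p:
                quicksort_bounded(arr, lo, p - 1)  # recurse into smaller side
                lo = p + 1                         # iterate on the larger side
            else:
                quicksort_bounded(arr, p + 1, hi)  # recurse into smaller side
                hi = p - 1                         # iterate on the larger side

Because the side we recurse into always has at most half the elements, the recursion depth is bounded by log2(n).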

If Quicksort was the fastest, wouldn't it be called Quickest-Sort?
