Selection Sort Python Implementations

Selection Sort is a straightforward comparison-based sorting algorithm that divides an array into sorted and unsorted regions, repeatedly selecting the smallest element from the unsorted region and placing it at the end of the sorted region. While not ideal for large datasets due to its quadratic time complexity, its simplicity and in-place operation make it a valuable learning tool. This article explores three Python implementations of Selection Sort, from a basic version to optimized variants, with detailed code, outputs, explanations, and complexity analyses.


1. Basic Selection Sort

def selectionSortBasic(arr):
    n = len(arr)
    for i in range(n):                         # iterate over sorted region
        min_idx = i                            # assume current index has minimum
        for j in range(i + 1, n):              # search unsorted region
            if arr[j] < arr[min_idx]:
                min_idx = j                    # update minimum index
        arr[i], arr[min_idx] = arr[min_idx], arr[i]  # swap with minimum


array = [50, 20, 40, 10, 30]
selectionSortBasic(array)
print(f"Result: {array}")

Output

Result: [10, 20, 30, 40, 50]

Explanation

The basic Selection Sort divides the array into a sorted region (initially empty) and an unsorted region. In each iteration, it finds the smallest element in the unsorted region (from index i + 1 to n - 1) and swaps it with the first element of the unsorted region (at index i). This process continues until the entire array is sorted. The algorithm always performs all comparisons, even if the array is already sorted, making it consistent but inefficient in terms of writes.
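To make the comparison and write behavior concrete, here is a small instrumented sketch of the same algorithm (the counter names are illustrative additions, not part of the original code) that tallies comparisons and swaps:

```python
def selection_sort_counted(arr):
    """Instrumented basic selection sort; sorts in place and returns (comparisons, swaps)."""
    n = len(arr)
    comparisons = swaps = 0
    for i in range(n):
        min_idx = i
        for j in range(i + 1, n):
            comparisons += 1                   # every element of the unsorted region is examined
            if arr[j] < arr[min_idx]:
                min_idx = j
        arr[i], arr[min_idx] = arr[min_idx], arr[i]
        swaps += 1                             # the basic version swaps unconditionally
    return comparisons, swaps

# An already-sorted input still incurs the full n(n-1)/2 comparisons and n swaps:
print(selection_sort_counted([10, 20, 30, 40, 50]))  # (10, 5)
```

Running this on any permutation of five elements reports the same (10, 5), which is exactly the input-independence discussed above.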

Time and Space Complexity

  • Worst-case: \( \Theta(n^2) \) – the inner loop always performs \( n(n-1)/2 \) comparisons, regardless of input order.
  • Average-case: \( \Theta(n^2) \) – the comparison count is identical for random inputs; only the number of min_idx updates varies.
  • Best-case: \( \Theta(n^2) \) – even a sorted array requires all comparisons, and every swap still executes (as a self-swap).
  • Space: \( \Theta(1) \) – in-place sorting with only a constant amount of extra memory for variables.

2. Selection Sort with Skip-Unnecessary-Swap

def selectionSortSkipUnnecessarySwap(arr):
    n = len(arr)
    for i in range(n):
        min_idx = i                            # assume current index has minimum
        for j in range(i + 1, n):
            if arr[j] < arr[min_idx]:
                min_idx = j                    # update minimum index
        if min_idx != i:                       # skip swap if already in place
            arr[i], arr[min_idx] = arr[min_idx], arr[i]


array = [50, 20, 40, 10, 30]
selectionSortSkipUnnecessarySwap(array)
print(f"Result: {array}")

Output

Result: [10, 20, 30, 40, 50]

Explanation

This optimized version introduces a check to skip unnecessary swaps. If the smallest element in the unsorted region is already at the correct position (i.e., min_idx == i), the swap is skipped. This reduces the number of write operations, which is particularly beneficial for hardware like flash memory where writes are costly. While the number of comparisons remains the same, fewer swaps improve practical performance in certain contexts.
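As a quick sanity check of the savings, a counting variant (the helper below is illustrative, with a flag to toggle the optimization) can compare swap counts on a nearly sorted input:

```python
def count_swaps(arr, skip_in_place):
    """Selection sort that counts the element swaps actually performed."""
    n = len(arr)
    swaps = 0
    for i in range(n):
        min_idx = i
        for j in range(i + 1, n):
            if arr[j] < arr[min_idx]:
                min_idx = j
        if not skip_in_place or min_idx != i:  # the optimization: swap only when needed
            arr[i], arr[min_idx] = arr[min_idx], arr[i]
            swaps += 1
    return swaps

# Mostly sorted input: only the one out-of-place pair forces a swap.
print(count_swaps([10, 20, 40, 30, 50], skip_in_place=False))  # 5
print(count_swaps([10, 20, 40, 30, 50], skip_in_place=True))   # 1
```

The comparison count is identical in both runs; only the number of writes differs, which is the whole point of this variant.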

Time and Space Complexity

  • Worst-case: \( \Theta(n^2) \) – same as the basic version, as all comparisons are still performed.
  • Average-case: \( \Theta(n^2) \) – comparisons remain unchanged, but fewer swaps occur for partially sorted arrays.
  • Best-case: \( \Theta(n^2) \) – comparisons are still performed, but no swaps are needed if the array is sorted.
  • Space: \( \Theta(1) \) – in-place sorting with only a constant amount of extra memory.

3. Dual Selection Sort (Min-Max)

def selectionSortDualMinMax(arr):
    n = len(arr)
    for i in range(n // 2):                    # iterate up to half the array
        min_idx = i                            # track minimum
        max_idx = i                            # track maximum
        for j in range(i + 1, n - i):          # search unsorted region
            if arr[j] < arr[min_idx]:
                min_idx = j                    # update minimum index
            if arr[j] > arr[max_idx]:
                max_idx = j                    # update maximum index
        arr[i], arr[min_idx] = arr[min_idx], arr[i]  # place minimum at start
        if max_idx == i:                       # adjust max_idx if minimum was at i
            max_idx = min_idx
        arr[n - 1 - i], arr[max_idx] = arr[max_idx], arr[n - 1 - i]  # place maximum at end


array = [50, 20, 40, 10, 30]
selectionSortDualMinMax(array)
print(f"Result: {array}")

Output

Result: [10, 20, 30, 40, 50]

Explanation

The Dual Selection Sort (also called double or min-max selection sort) finds both the minimum and maximum of the unsorted region in each pass, placing them at its start and end, respectively. This halves the number of passes to roughly n/2, since each pass fixes two elements. One subtlety: if the maximum happens to sit at index i, the initial minimum swap moves it to min_idx, so max_idx must be updated before the second swap to preserve correctness. Note that each pass performs two comparisons per element, so the total comparison count is essentially unchanged; the practical benefit comes from reduced loop overhead, not from fewer comparisons.
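To see the pass reduction directly, here is an instrumented sketch (the pass counter is an illustrative addition) with the same logic as selectionSortDualMinMax:

```python
def sort_and_count_passes(arr):
    """Dual (min-max) selection sort; sorts in place and returns the number of passes."""
    n = len(arr)
    passes = 0
    for i in range(n // 2):
        passes += 1
        min_idx = max_idx = i
        for j in range(i + 1, n - i):          # the unsorted region shrinks from both ends
            if arr[j] < arr[min_idx]:
                min_idx = j
            if arr[j] > arr[max_idx]:
                max_idx = j
        arr[i], arr[min_idx] = arr[min_idx], arr[i]
        if max_idx == i:                       # the min swap moved the max to min_idx
            max_idx = min_idx
        arr[n - 1 - i], arr[max_idx] = arr[max_idx], arr[n - 1 - i]
    return passes

data = [9, 1, 8, 2, 7, 3]
print(sort_and_count_passes(data), data)  # 3 [1, 2, 3, 7, 8, 9]
```

A single-ended selection sort would make 6 passes over this input; the dual version finishes in 3. For odd-length arrays the middle element is left untouched, which is correct because everything around it has already been placed.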

Time and Space Complexity

  • Worst-case: \( \Theta(n^2) \) – the comparison count remains quadratic, though only about half as many passes are needed.
  • Average-case: \( \Theta(n^2) \) – the halved pass count trims loop overhead, but each pass does roughly twice the comparisons, so total work is essentially unchanged.
  • Best-case: \( \Theta(n^2) \) – all comparisons are still performed on a sorted array; the swaps become self-swaps but still execute.
  • Space: \( \Theta(1) \) – in-place sorting with only a few extra variables.

Conclusion

The three Selection Sort implementations demonstrate a progression in optimization. The Basic Selection Sort is simple but performs unnecessary swaps. The Skip-Unnecessary-Swap version reduces write operations, improving efficiency for write-sensitive environments. The Dual Selection Sort further optimizes by sorting both ends of the array in each pass, halving the number of iterations. Despite their quadratic time complexity, these optimizations make Selection Sort more practical for specific use cases, such as small datasets or systems where minimizing writes is critical. For large datasets, more efficient algorithms like Quick Sort or Merge Sort are recommended, but Selection Sort remains a valuable educational tool for understanding sorting fundamentals.
