Yeah this page is cool, and it shows that this “naive sort” (custom) is close but not identical to insertion sort, which is mentioned at the end of the paper.

And it also shows that it’s totally different from bubble sort.

You have to click the right box and then find the tiny “start” button, but it’s useful.

I recommend clicking these four boxes:

quick sort

merge sort

custom sort (this naive algorithm)

bubble sort

Then click “start”.

And then you can clearly see the difference side by side, including the speed of the sort!

Quick sort is quick! Merge sort is better in the worst case but slower on this random data.

I thought (these are 15-year-old memories) that what made merge sort nice is that it isn’t particularly dependent on the data, so the performance isn’t really affected if the data is nicely randomized or partially sorted or whatever, whereas quicksort’s performance does depend to some extent on properties of the input sequence (usually to its benefit, but occasionally to its detriment).

If you are playing with this website: when you change your selected sorts, press “stop” before you press “start” again. Otherwise both sorts will run at the same time, undoing each other’s work, and you will wind up with some freaky patterns.

This comment is brought to you by “wow I guess I have absolutely no idea how radix sort works.”

The intuition: suppose you have to sort 1 million numbers, BUT you know that they’re all from 1 to 10. What’s the fastest way of sorting?

Well, you can do it in guaranteed linear time if you just create an array of 10 “buckets”, make a single pass through the array, and increment the counter in the corresponding bucket for each element.

After that, print out each number for the number of times it appears in its bucket, like

[ 2 5 1 ... ]  (counts for the values 1, 2, 3, ...) ->
1 1 2 2 2 2 2 3 ...

etc.
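That bucket trick is counting sort; a minimal Python sketch (assuming small non-negative integer keys, bounded by a known max_value):

```python
def counting_sort(xs, max_value):
    # One pass to tally how often each value occurs.
    counts = [0] * (max_value + 1)
    for x in xs:
        counts[x] += 1
    # Emit each value as many times as it was counted.
    out = []
    for value, count in enumerate(counts):
        out.extend([value] * count)
    return out

print(counting_sort([3, 1, 2, 1, 3, 2, 2], 10))  # → [1, 1, 2, 2, 2, 3, 3]
```

Both passes are linear, so the whole thing is O(n + k) for k distinct values, which is why it beats comparison sorts when k is small.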

I think that lines up with the visualization because you get the “instant blocking” of identical colors. Each color is a value, like 1 to 10. (Watching it again, I think it’s done in 3 steps, like they consider the least significant digits first, then the middle digits, then the most significant digits. It’s still a finite number of distinct values.)

There are variations on this, but that’s the basic idea.
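One common variation is LSD radix sort, which is probably what the multi-pass behaviour in the visualization corresponds to: it repeats the bucketing pass once per digit, least significant digit first. A rough sketch (my own, for non-negative integers; it relies on each pass being stable):

```python
def radix_sort(xs, base=10):
    # Bucket by one digit per pass, least significant digit first.
    # Each pass preserves the relative order from the previous pass,
    # which is what makes the whole thing come out sorted.
    if not xs:
        return xs
    digit = 1
    while digit <= max(xs):
        buckets = [[] for _ in range(base)]
        for x in xs:
            buckets[(x // digit) % base].append(x)
        xs = [x for bucket in buckets for x in bucket]
        digit *= base
    return xs

print(radix_sort([170, 45, 75, 90, 802, 24, 2, 66]))
# → [2, 24, 45, 66, 75, 90, 170, 802]
```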

And it makes sense that it’s even faster than QuickSort when there are a limited number of distinct values. If you wanted to sort words in a text file, then Radix sort won’t work as well. There are too many distinct values.

It’s not a general purpose sorting algorithm, which is why it looks good here.

Oh, yeah — I meant that I started radix sort while the custom sort was still running, and it just kind of scrambled the colors insanely, and it took me a few minutes of thinking “dang and here I thought radix sort was pretty simple” before I realized they were both running at the same time :)

Interesting way to sort; kind of hard to prove it actually works. But still, it feels like a very unoptimized bubble sort, bubbling all smaller numbers to xs[i].

Isn’t this Bubble Sort? As in, the O(n^2) sorting algorithm that everyone invents and then discovers why it’s so terrible when they learn a little bit of complexity theory? I remember implementing this exact algorithm in OPL for the Psion Series 3 when I was 13. It was the first time I discovered that it’s important to think about algorithmic complexity (on a 3.84MHz 8086 clone, you don’t need a very large n for n^2 to be painfully slow!), though I didn’t learn how to formally reason about these problems until some years later.

It’s not, and that’s fairly clear from the pseudocode in the paper and the Wikipedia article you linked. The biggest difference is that bubble sort iterates over the list repeatedly, comparing adjacent pairs of elements and swapping them if they’re out of order, until it’s able to iterate over the list without a single swap, i.e.

loop:
    swapped = false
    for i from 1 to length(xs):
        if xs[i-1] > xs[i]:
            swap(xs[i-1], xs[i])
            swapped = true
    if not swapped:
        break

Whereas the version in the paper compares more than just adjacent elements, and holds no state outside of the array:

for i from 0 to length(xs):
    for j from 0 to length(xs):
        if xs[i] < xs[j]:
            swap(xs[i], xs[j])
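A direct Python transcription of both pieces of pseudocode (just to check the behaviour, not code from the paper) shows that both do end up sorted, even though the naive version swaps when xs[i] < xs[j], which looks like the wrong way around:

```python
def naive_sort(xs):
    # The paper's algorithm: compare *every* pair (i, j), not just neighbours,
    # and swap whenever xs[i] < xs[j].  Counterintuitively, this sorts ascending.
    n = len(xs)
    for i in range(n):
        for j in range(n):
            if xs[i] < xs[j]:
                xs[i], xs[j] = xs[j], xs[i]
    return xs

def bubble_sort(xs):
    # Classic bubble sort: repeatedly swap out-of-order *adjacent* pairs
    # until a full pass makes no swaps.
    while True:
        swapped = False
        for i in range(1, len(xs)):
            if xs[i - 1] > xs[i]:
                xs[i - 1], xs[i] = xs[i], xs[i - 1]
                swapped = True
        if not swapped:
            return xs

print(naive_sort([5, 2, 4, 1, 3]))   # → [1, 2, 3, 4, 5]
print(bubble_sort([5, 2, 4, 1, 3]))  # → [1, 2, 3, 4, 5]
```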

Huh, the Wikipedia article is indeed more complex. This article describes the algorithm that I’d implemented, that I was later told was a reinvention of the bubblesort. The animation on the wikipedia page looks exactly like the behaviour I’d expect from this algorithm.

The beginning of the paper talks about bubble sort, and why it’s different. And the end of the paper says it’s close but not identical to insertion sort. I recommend watching the animations mentioned elsewhere in the thread – it clears it up instantly.

Someone shared a visualization of the sorting algorithm on ycombinator news.

PS: Really, don’t enable sound; it’s loud and awful.

cycle sort is pretty interesting too!

Yeah the radix sort visualization is cool!

Nice visualisation, though it does make some algorithms (selection sort) look better than they are!

Note that this algorithm appears to swap when the elements are in order, and has a non-trivial proof of correctness.

https://lobste.rs/s/gh1ngc/is_this_simplest_most_surprising_sorting#c_aadf4b