If this is anything like Schönhage-Strassen, it's only effective for extremely large numbers. For medium-sized to large numbers, Karatsuba's method tends to be faster because it has less overhead (not to mention that it's incredibly simple to understand, while these FFT-based methods are pretty convoluted, at least in my opinion).
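
Since Karatsuba's method comes up a few times in this thread, here's a minimal Python sketch of the trick (three recursive multiplies instead of four), assuming nonnegative integers and splitting on decimal digits for readability:

```python
def karatsuba(x, y):
    """Multiply nonnegative integers x and y with Karatsuba's method."""
    if x < 10 or y < 10:                 # base case: single digit
        return x * y
    half = max(len(str(x)), len(str(y))) // 2
    p = 10 ** half
    xh, xl = divmod(x, p)                # x = xh * p + xl
    yh, yl = divmod(y, p)                # y = yh * p + yl
    a = karatsuba(xh, yh)                # high * high
    b = karatsuba(xl, yl)                # low * low
    # One multiply recovers both cross terms: (xh+xl)(yh+yl) - a - b
    c = karatsuba(xh + xl, yh + yl) - a - b
    return a * p * p + c * p + b
```

Real implementations (GMP, CPython's long ints) do the same thing on machine-word limbs rather than decimal digits, and fall back to schoolbook multiplication below a tuned size threshold.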

The Harvey-van der Hoeven algorithm is mostly of academic interest. From Wikipedia:

The algorithm is not considered practically useful, as its advantages only appear when multiplying extremely large numbers (having more than 2^(1729^12) bits).

I do wonder if Fürer’s algorithm would be useful for numbers having millions or billions of bits.

Sublime understatement! If the universe has 10^80 (about 2^266) atoms like they say, then we’re going to have a hard time even representing one such number, let alone two to rub together.
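
The arithmetic here checks out, and it's striking how far even the atom count is from the 2^(1729^12)-bit threshold quoted above:

```python
import math

# Bits needed to write down ~10^80, the rough atom count of the
# observable universe: 80 * log2(10) ~= 265.75, i.e. about 2^266.
bits = 80 * math.log2(10)
print(round(bits))  # -> 266

# The quoted threshold is 2^(1729^12) bits; the *exponent* alone
# is a 39-digit number, so the bit count is beyond astronomical.
print(len(str(1729 ** 12)))  # -> 39
</antml>```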

Much respect to them, and I wish them success with their proofs, but what these mathematicians and their article-writing promoters mean by “perfect” and “most efficient” is not quite what we earth-bound computer folks mean.

Yeah, I found the article really confusing: it seems to pretty clearly say that there’s no proof this is the best possible (though it sounds like there are some indications that it may be), and then it ends with “Regardless of what computers look like in the future, Harvey and van der Hoeven’s algorithm will still be the most efficient way to multiply.”

> If this is anything like Schönhage-Strassen, it's only effective for extremely large numbers. For medium-sized to large numbers, Karatsuba's method tends to be faster because it has less overhead (not to mention that it's incredibly simple to understand, while these FFT-based methods are pretty convoluted, at least in my opinion).

I see what you did there.

> The Harvey-van der Hoeven algorithm is mostly of academic interest.

> I do wonder if Fürer’s algorithm would be useful for numbers having millions or billions of bits.

I’m sure the GMP folks have experimented with it and if it’s even remotely practical it will be included in their library.

> Sublime understatement! If the universe has 10^80 (about 2^266) atoms like they say, then we’re going to have a hard time even representing one such number, let alone two to rub together.

> Much respect to them, and I wish them success with their proofs, but what these mathematicians and their article-writing promoters mean by “perfect” and “most efficient” is not quite what we earth-bound computer folks mean.

A bit click-baity, no?

> Yeah, I found the article really confusing: it seems to pretty clearly say that there’s no proof this is the best possible (though it sounds like there are some indications that it may be), and then it ends with “Regardless of what computers look like in the future, Harvey and van der Hoeven’s algorithm will still be the most efficient way to multiply.”

We keep saying that we can’t do it faster, yet we’re continually making it faster.

TIL the Karatsuba method. Holy, that’s a really cool method.

Here’s a deep dive into similar algorithms that get used as the size of the multiplication operands increases: https://gmplib.org/manual/Multiplication-Algorithms#Multiplication-Algorithms