
So on p.133 begins a list of 524 exercises occupying some 50 pages. I thought perhaps this was an error in the table of contents. One could easily spend three years attempting to solve the exercises.

It’s also interesting to see Knuth’s confession of faith on p.1, that P = NP, but only in theory.

Well, the interesting thing is that, as I’ve read things, SAT isn’t a hard problem on average: randomly chosen SAT problems tend to be easy except for a few distributions. And even the harder SAT problems are often tremendously sped up by the proper choice of branching variables in DPLL (or other algorithms). So NP-complete problems (since they are reducible to SAT in general) really are more like “average-easy, worst-case-hard”, which among other things makes them bad for any cryptographic uses.
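
For anyone unfamiliar with DPLL’s branching step, here is a minimal sketch (my own illustrative code, not from the fascicle) of DPLL with unit propagation and a pluggable branching heuristic, which is the “choice of variables” knob the comment above refers to:

```python
def unit_propagate(clauses, assignment):
    """Repeatedly assign variables forced by unit clauses; None on conflict."""
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            # Skip clauses already satisfied under the partial assignment.
            if any(assignment.get(abs(l)) == (l > 0) for l in clause):
                continue
            unassigned = [l for l in clause if abs(l) not in assignment]
            if not unassigned:
                return None  # every literal assigned and false: conflict
            if len(unassigned) == 1:  # unit clause: its literal is forced
                lit = unassigned[0]
                assignment[abs(lit)] = lit > 0
                changed = True
    return assignment

def dpll(clauses, assignment=None, choose=None):
    """Minimal DPLL; `choose(free_vars, clauses, assignment)` picks the
    branching variable (defaults to the lowest-numbered free variable)."""
    assignment = dict(assignment or {})
    if unit_propagate(clauses, assignment) is None:
        return None
    free = {abs(l) for c in clauses for l in c} - set(assignment)
    if not free:
        return assignment  # all variables assigned, no conflicts: model found
    var = choose(free, clauses, assignment) if choose else min(free)
    for value in (True, False):
        result = dpll(clauses, {**assignment, var: value}, choose)
        if result is not None:
            return result
    return None

# Toy instance: literals are signed ints (-3 means "not x3"); one possible
# heuristic is to branch on the most frequently occurring variable.
cnf = [[1, 2], [-1, 3], [-2, -3], [1, -3]]
most_frequent = lambda free, clauses, _: max(
    free, key=lambda v: sum(abs(l) == v for c in clauses for l in c))
print(dpll(cnf, choose=most_frequent))
```

Swapping in a different `choose` function changes which subtrees of the search get explored first, which is exactly why heuristic choice can matter so much in practice.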

I’m not sure that conclusion follows just from the fact that NP problems can be reduced to SAT. In most cases, reducing to SAT is non-trivial. I don’t see any reason why most SAT instances being “easy” would imply that other problems with complicated mappings to SAT would also be easy.
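
To make “reducing to SAT” concrete, here is a sketch (my own illustrative code, with names of my own choosing) of the classic graph-3-coloring-to-CNF encoding, checked with a tiny brute-force solver. Note how the structure of the resulting formulas is nothing like a uniformly random one:

```python
from itertools import product

def coloring_to_cnf(edges, n_vertices, n_colors=3):
    """Encode graph k-coloring as CNF. The positive integer var(v, c)
    stands for 'vertex v gets color c'."""
    var = lambda v, c: v * n_colors + c + 1
    clauses = []
    for v in range(n_vertices):
        clauses.append([var(v, c) for c in range(n_colors)])   # >= one color
        for c1 in range(n_colors):
            for c2 in range(c1 + 1, n_colors):
                clauses.append([-var(v, c1), -var(v, c2)])     # <= one color
    for u, w in edges:
        for c in range(n_colors):
            clauses.append([-var(u, c), -var(w, c)])           # endpoints differ
    return clauses

def brute_force_sat(clauses, n_vars):
    """Tiny exhaustive SAT check, fine only for toy instances."""
    for bits in product([False, True], repeat=n_vars):
        if all(any(bits[abs(l) - 1] == (l > 0) for l in c) for c in clauses):
            return bits
    return None

# A 4-cycle is 2-colorable, hence certainly 3-colorable.
cnf = coloring_to_cnf([(0, 1), (1, 2), (2, 3), (3, 0)], 4)
print(brute_force_sat(cnf, 12) is not None)  # → True
```

Even for this simple source problem, the encoding imposes heavy structure (exactly-one constraints, edge constraints), so results about random SAT distributions say little about the formulas a reduction actually produces.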

Yes, it definitely does not follow solely from the reduction - in particular because randomized SAT instances are involved in one of several proposed cryptosystems for post-quantum cryptography. In general NP-hard problems are useful for that purpose, but NP-complete ones are not. My understanding of this is very limited, but it appears to have something to do with the ability to generate hard instances with relative ease, as joe_the_user suggests.

It depends on how you randomly choose the SAT problems. You can’t have a uniform distribution over an infinite domain, and there are an infinite number of SAT problems, so your choice of problems has to be biased in some way. Historically, there was a case where someone published a paper claiming to find a SAT algorithm that was fast on average, and a few years later, someone published a paper showing that the distribution of SAT problems they used was so biased towards being easy that it was overwhelmingly likely that a random choice of variable values would satisfy the formula!

But of course the particular formulas that arise from your reduction of some other NP-complete problem to SAT may not be easy on average! (Of course this may depend on the distribution you arbitrarily choose over the infinite instances of that other NP-complete problem.)
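
That “random assignment satisfies the formula” phenomenon is easy to see empirically. A sketch with arbitrary parameters of my own choosing: a uniformly random assignment satisfies each 3-literal clause with probability 7/8, so when a distribution produces few clauses relative to variables, a random guess succeeds fairly often, and the hit rate decays roughly like (7/8)^m as the clause count m grows:

```python
import random

def random_3cnf(n_vars, n_clauses, rng):
    """Random 3-CNF: three distinct variables per clause, each negated
    with probability 1/2."""
    return [[v * rng.choice([1, -1])
             for v in rng.sample(range(1, n_vars + 1), 3)]
            for _ in range(n_clauses)]

def satisfied(clauses, bits):
    return all(any(bits[abs(l) - 1] == (l > 0) for l in c) for c in clauses)

rng = random.Random(1)
n, trials = 20, 20000
rates = {}
for m in (5, 20, 60):  # low clause density -> easy for a random guess
    cnf = random_3cnf(n, m, rng)
    hits = sum(satisfied(cnf, [rng.random() < 0.5 for _ in range(n)])
               for _ in range(trials))
    rates[m] = hits / trials
    print(f"m={m:2d}  random-guess hit rate {rates[m]:.3f}  "
          f"(7/8)^m = {(7 / 8) ** m:.3f}")
```

The (7/8)^m figure is only approximate for a fixed formula, since clauses share variables, but the qualitative point stands: a distribution skewed toward low clause density is trivially easy.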

Right. It’s an extremely subtle area, and when your goal is something external to solving any particular instance of the problem, redefining it slightly can be helpful.

For example, for proving things about complexity, it was a bit of a revelation to me last year when I finally understood the difference between 3SAT and MAX-3SAT - why 3SAT is NP-complete while MAX-3SAT, being an optimization problem rather than a decision problem, is NP-hard but not NP-complete - and why MAX-3SAT can nonetheless be approximated in polynomial time to a guaranteed ratio of 7/8. (If the meaning of “7/8” is unclear, look into what the “MAX” means. This is the Karloff–Zwick algorithm, if you want to read further.) This sort of result is very illuminating about the sorts of things that can be proved in general.
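
The source of the 7/8 is easy to demonstrate: a uniformly random assignment satisfies each clause with three distinct literals with probability 1 − (1/2)³ = 7/8, so in expectation it satisfies 7/8 of the clauses. (Karloff and Zwick’s contribution is an SDP-based algorithm that matches the 7/8 bound even for clauses with fewer than three literals.) A quick empirical check, with arbitrary parameters of my own choosing:

```python
import random

rng = random.Random(0)
n_vars, n_clauses, trials = 30, 200, 5000

# Random 3-CNF with three distinct variables per clause.
cnf = [[v * rng.choice([1, -1])
        for v in rng.sample(range(1, n_vars + 1), 3)]
       for _ in range(n_clauses)]

# Count satisfied clauses over many uniformly random assignments.
total = 0
for _ in range(trials):
    bits = [rng.random() < 0.5 for _ in range(n_vars)]
    total += sum(any(bits[abs(l) - 1] == (l > 0) for l in c) for c in cnf)

frac = total / (trials * n_clauses)
print(f"mean fraction of clauses satisfied: {frac:.4f}")  # ≈ 0.875
```

Håstad later showed this 7/8 ratio is optimal unless P = NP, which is part of what makes the result so illuminating.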

For post-quantum cryptography, SAT variants haven’t been used and this may be part of why. Wikipedia has a decent summary of what is useful, and there’s a paper titled “Survey of Computational Assumptions Used in Cryptography Broken or Not by Shor’s Algorithm” which is a pretty okay introduction to why the field exists. I don’t know much more than that; I have to admit it seems like a premature area of research!

Security proofs in cryptography have a long record of leading people astray. Remember NP-hard knapsacks?

I didn’t, but I have now read a bit about them. I guess I haven’t been around long enough. Yes, seems like an important thing to remember!

(Search terms for the curious: “Merkle–Hellman knapsack cryptosystem broken”)

You’ve probably been around as long as I have, even if under another name.

That’s possible. Not actively studying crypto topics though. :)

Has Prof. Knuth said whether he has made arrangements for TAoCP to be continued in the event of his passing? I was looking at the table of contents, and there are still chapters on recursion (I am super interested in that one), lexical analysis, and parsing to be written after the combinatorial material, and those are going to be 2-3 more volumes.

I haven’t heard him ever say that, and I doubt anyone else could do it, or they’d be doing it already. I mean, no other survey of satisfiability like this one exists, right?

It is worth noting that Knuth implemented a solver and submitted it to the SAT Competition 2013, winning one category. It appears as dk-SAT11 in http://satcompetition.org/2013/results.shtml.

Here is a description of what I think is the program:

http://www-cs-faculty.stanford.edu/~uno/programs/sat11.w

Though it could be another one from this page:

http://www-cs-faculty.stanford.edu/~uno/programs.html

Unfortunately, it’s not (the certification part is missing in the above program).

And the SAT2013 submission only includes the tangled .c code. :/