There are a lot of definition errors which may also be factual errors in this article. I feel this could lead to a lot of confusion (which leads to more articles including that confusion which would lead to more confused readers, …).
tl;dr We’ve typically considered it a deal-breaker to discover that an algorithm we care about is NP-hard.
Algorithms are not NP-hard, problems are. You can have more than one algorithm (quicksort, mergesort, bogosort) for the same problem (sorting).
Algorithms have inherent complexity (worst-case running time, among other measures). Problems have inherent difficulty. Algorithms do not have any associated notion of difficulty.
If you make the substitution “algorithm is/isn’t NP-hard” → “problem is/isn’t NP-hard”, a lot of the sentences make much more sense.
A problem in NP is hard to solve
No, not all problems in NP are hard. In fact, P is contained in NP (and we just said all of those problems are easy).
NP-complete (which is not the same thing as NP) problems are (presumably) hard to solve. NP-hard (which is also not the same thing as NP, and not the same thing as NP-complete) problems are also (presumably) hard to solve, possibly harder than the NP-complete ones.
NP problems are those which can be solved in polynomial time on a non-deterministic machine. In fact, it’s in the name: “Non-deterministic Polynomial”. A non-deterministic machine is a regular machine with one extra instruction: a special superfork() that makes a copy of the whole machine (and puts a copy of the process in each forked machine, like a regular fork()).
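To make the superfork() picture concrete, here’s a toy sketch (my own example, not from the article): the non-deterministic machine accepts if *any* forked copy accepts, and a regular machine can simulate that by just exploring both branches of every fork, at exponential cost. Subset-sum is the stand-in problem here.

```python
# Deterministic simulation of a non-deterministic machine: at each "superfork"
# point we explore BOTH copies, and accept if any copy accepts. The
# non-deterministic machine does this in polynomial time; our simulation
# takes exponential time in the worst case.
def subset_sum(nums, target, i=0, total=0):
    if i == len(nums):
        return total == target  # this "copy" of the machine halts and checks
    # superfork(): one copy includes nums[i], the other skips it
    return (subset_sum(nums, target, i + 1, total + nums[i])
            or subset_sum(nums, target, i + 1, total))

print(subset_sum([3, 34, 4, 12, 5, 2], 9))   # True  (4 + 5 = 9)
print(subset_sum([3, 34, 4, 12, 5, 2], 30))  # False (no subset sums to 30)
```

The `or` is doing the work: it is the deterministic stand-in for “some forked copy accepted”.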
NP-complete problems are problems in NP that are also “as hard as” any problem in NP.
NP-hard problems are those that are at least “as hard as” any problem in NP but not (necessarily) known to be themselves in NP. (Or rather, being NP-hard makes no statement of whether the problem is in NP or not.)
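The equivalent, more common characterization of NP is that a candidate solution (a “certificate”) can be *verified* in polynomial time, even when *finding* one seems to require exponential search. A minimal sketch for SAT, using my own ad-hoc encoding (a clause is a list of nonzero ints, negative meaning a negated variable):

```python
def verify_sat(clauses, assignment):
    """Return True iff `assignment` (var -> bool) satisfies every clause.

    This check is linear in the size of the formula, which is exactly
    what puts SAT in NP: verifying a certificate is cheap.
    """
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in clauses
    )

# (x1 OR NOT x2) AND (x2 OR x3)
clauses = [[1, -2], [2, 3]]
print(verify_sat(clauses, {1: True, 2: False, 3: True}))   # True
print(verify_sat(clauses, {1: False, 2: True, 3: False}))  # False
```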
I would argue that there’s a third possibilty, which is to just…shrug. NP-hard is a worst-case bound on your algorithm. It might only apply to “pathological” inputs.
This is a really bad conclusion.
By analogy, you could also say “the library’s tests don’t pass, but people might not be using it in the manner of those pathological test cases”. Or “these asserts fail, but maybe the program state fixes itself in all but the pathological cases”. The level of (un)certainty is comparable.
Instead what you are supposed to do is exactly what’s discussed for the Go ecosystem.
While researching this problem for the Go ecosystem, Russ Cox pointed out that you can simplify the problem by prohibiting certain kinds of negative dependencies
You should describe the actual problem more accurately. Maybe this particular refinement of the problem statement now undershoots your needs (the original one probably overshot by a lot). But now the task is to refine it further (or refine from an earlier version).
For example, at most 100 negative dependencies are allowed.
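To see why prohibiting negative dependencies helps, here’s a toy model of my own (in the spirit of, but much simpler than, Go’s minimal version selection): if every requirement is just “module M at version ≥ V”, then taking the highest requested version of each module always satisfies everyone, so the restricted problem is trivially polynomial. It’s the “version X is incompatible with version Y” constraints that bring the NP-hardness back.

```python
# Toy version selection with only lower-bound ("at least") requirements.
# With no negative constraints, a single pass picking the max requested
# version per module always yields a valid solution.
def pick_versions(requirements):
    """requirements: list of (module, min_version) pairs."""
    chosen = {}
    for module, min_version in requirements:
        chosen[module] = max(chosen.get(module, min_version), min_version)
    return chosen

reqs = [("A", 1), ("B", 2), ("A", 3), ("B", 1)]
print(pick_versions(reqs))  # {'A': 3, 'B': 2}
```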
There are actually many other things to try in the face of NP-hardness. And from experience, when those all fail, you’re never so “lucky” that the input you care about isn’t pathological. When you do get lucky, the luck can usually be explained, so that you are once more certain instead of just running your program and hoping for the best.
No, not all problems in NP are hard. In fact, P is contained in NP (and we just said all of those problems are easy).
Even this interpretation of P = easy is dubious. The class P is huge, and it contains very difficult problems that cannot be solved by algorithms that run faster than O(n^googolplex). The fact that a problem is in P does not mean that it can be solved in practice; and for the same reason, a problem’s being in NP does not mean that a candidate solution can be verified easily. It is true that many of the well-known polynomial algorithms run in linear or (at worst) quadratic time, but this is a limitation of our current knowledge, not of the class P.
I remember playing with Slackware around 20 years ago - it was such a nice, fast distribution with dead-simple package management. Non-standard software you had to compile yourself, with all the dependencies resolved on your own :) Oh, fond memories…
I started off with Slackware in 1996. I got a six-pack (I believe) of Linux distros on cdrom. It shipped Red Hat, Debian and Slackware.
My friend told me Slackware is the hardest-core so I decided to jump in the deep end immediately.
Zero regrets!
Same for me but two years later, 1998. Using Slackware was likely one of the biggest contributions to my career. The fact that it forced me to really understand what was going on taught me so much about operating systems, software development and open source. I haven’t used Slack in a looong time, but I hold it dear in my memories!
Exactly! And these were times before Google, even, so everything had to be learned the hard way. Like patching Joliet support into the kernel for cdroms.
LFS was another big thing. I ran that for quite a while too, I think with Debian packaging.
The switch to Debian, after forays into Red Hat and Mandrake, felt like a nice retirement from that, like something I’d earned through self-education.
Those were likely Infomagic ones, like these https://archive.org/details/Linux_Developers_Resource_InfoMagic_March_1995_Disc_1
Nowadays the package management is as simple as ever, but the collection of binary packages is huge and very up to date (much more so than Ubuntu). For example, there are only a few days between a new release of gcc or clang and their official Slackware packages, so you always run a really modern system.