1. 12

EDIT: paper being withdrawn

We (claim to) prove the extremely surprising fact that NP=RP. This is achieved by creating a Fully Polynomial-Time Randomized Approximation Scheme (FPRAS) for approximately counting the number of independent sets in bounded-degree graphs, with any fixed degree bound, which is known to imply NP=RP. While our method is rooted in the well-known Markov Chain Monte Carlo (MCMC) approach, we overcome the notorious problem of slow mixing with a new idea for generating a random sample from among the independent sets. A key tool that enables the result is a solution to a novel sampling task that we call Subset Sampling. In its basic form, we are given, as input, a stationary sample from the (exponentially large) state space of a Markov chain, and we want to transform it into another stationary sample that is conditioned on falling into a given subset, which is itself still exponentially large. In general, Subset Sampling can be both harder and easier than stationary sampling from a Markov chain. It can be harder, due to the conditioning on a subset, which may have more complex structure than the original state space. But it may also be easier, since a stationary sample is already given, and this sample, in a sense, encompasses “most of the hardness” of such sampling tasks: it is already in the stationary distribution, which is hard to reach in a slowly mixing chain. We show that it is possible to efficiently balance the two sides: we can capitalize on already having a stationary sample from the original space, so that the complexity of confining it to a subset is mitigated. We prove that an efficient approximation is possible for the considered sampling task, and then apply it recursively to create the FPRAS.
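For context, the standard MCMC baseline the abstract builds on is Glauber dynamics on independent sets, and the Subset Sampling task is easy to state in code. Below is a minimal, illustrative Python sketch of that baseline together with the naive rejection approach to Subset Sampling; it is not the paper's algorithm, and all function names are mine:

```python
import random

def glauber_step(graph, ind_set):
    # One update of Glauber dynamics for uniformly random independent sets
    # (the hard-core model at fugacity 1): pick a uniform vertex and try to
    # toggle it while preserving independence.
    v = random.choice(list(graph))
    if random.random() < 0.5:
        if not (graph[v] & ind_set):  # v may enter only if no neighbor is in
            ind_set.add(v)
    else:
        ind_set.discard(v)
    return ind_set

def chain_sample(graph, steps):
    # Run the chain from the empty set. For larger degree bounds the chain
    # can mix exponentially slowly, which is the notorious obstacle the
    # abstract refers to: no polynomial number of steps guarantees a sample
    # close to stationary.
    s = set()
    for _ in range(steps):
        s = glauber_step(graph, s)
    return s

def subset_sample_naive(graph, steps, in_subset):
    # Naive Subset Sampling by rejection: redraw (approximately) stationary
    # samples until one lands in the target subset. Correct in principle, but
    # hopeless when the subset carries exponentially small stationary mass --
    # exactly the regime the paper's Subset Sampling tool is meant to handle.
    while True:
        s = chain_sample(graph, steps)
        if in_subset(s):
            return s

# Example: a 4-cycle, conditioning on independent sets that contain vertex 0.
g = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(subset_sample_naive(g, steps=1000, in_subset=lambda s: 0 in s))
```

The point of the sketch is where it fails: `chain_sample` needs exponentially many steps on slowly mixing instances, and `subset_sample_naive` needs exponentially many retries when the subset has tiny stationary mass. The paper's claimed contribution is to sidestep both by reusing the given stationary sample.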

  1. 8

    UPDATE: the paper is possibly being withdrawn due to an error, which sounds like it is in Theorem 1 on page 7.

    1. 4

      The withdrawal is now on arXiv: “Paper is withdrawn because a counterexample was found to Theorem 1”.

      1. 1

        The PDF is no longer on arXiv since it’s been withdrawn. Does anyone still have it? Or (inclusive or) details on the counterexample?

        1. 4

          You can access previous versions on arXiv. Try https://arxiv.org/abs/2008.00601v1.

    2. 4

      That is amazing. This was not especially expected, and the author themselves notes that none of the big questions have been answered, but they are all now in much more tension. Can RP be derandomized? Whether P and RP can be separated is now an even bigger question than before.

      I’m not yet sure whether the proof convinces me. The high-level sketch seems reasonable, and the pieces seem alright, but I don’t yet grok the deep algorithmic insight that makes the entire scheme work.

      The author proposes a bold new research direction: suppose that RP cannot be derandomized, i.e., that P ≠ RP! Then, instead of chasing derandomization, let’s focus on actually separating P from RP.

      The paper collapses PH to BPP, which closes off a lot of interesting possibilities around BPP. BPP might still equal BQP, at least; that’s still open. We still have two interesting avenues of separation: we could follow zero-knowledge proofs, or we could explore constraint satisfaction problems. Intuitively, surely CSP = NP; but we haven’t proven it, and maybe it’s time to examine that intuition more closely. The acronyms get confusing; a big diagram is helpful for me, so a minimal inclusion chain is sketched below.
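      Here is that chain (not a full diagram; the collapse step is the standard fact that NP ⊆ BPP forces PH = BPP):

      ```latex
      % Unconditionally: P ⊆ RP ⊆ NP, and RP ⊆ BPP ⊆ PH (Sipser–Gács–Lautemann).
      % If the paper's claim NP = RP held, then NP ⊆ BPP, and the standard
      % collapse (NP ⊆ BPP implies PH = BPP) would flatten the whole hierarchy:
      \[
        \mathsf{P} \subseteq \mathsf{RP} = \mathsf{NP}
        \subseteq \mathsf{BPP} = \mathsf{PH}.
      \]
      ```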

      1. 2

        As the author himself readily admits, this is extremely surprising. The paper passes basic sanity tests (for example, on a cursory glance it does not immediately lead to statements contradicting known results, as MANY P?=NP papers do; the implied algorithm is O(n^9) and therefore impractical; etc.) and the author is reputable, but I wouldn’t believe this even if Leslie Valiant claimed it. Let’s wait and see.

        1. 1

          As an aside, the randomized complexity classes have always been interesting to me. It feels somewhat wrong to be able to make use of randomness, kind of like how thermodynamics doesn’t let you extract useful work from pure entropy. In that vein, P=BPP wouldn’t be so surprising.

          But at the same time, randomized algorithms often seem so much more elegant and straightforward than their deterministic counterparts. From that angle, P=BPP would actually be a bit unexpected: the additional expressiveness would buy exactly nothing.