I don't really get why people think P≠NP anymore. I mean, isn't everyone expecting some kind of superintelligent AGI that can recursively improve itself to turn the planet into goo? Don't people think that this kind of AGI will get so good at nonlinear optimization that it will turn all humans into paperclips as a way to maximize the output of a paperclip factory? Why do we think that this insane next-level intelligence will somehow be able to do things like that, but not be able to figure out what pattern of bits gets a bunch of AND and OR gates to output a "1"?
10-15 years ago, the basic idea was "P≠NP because otherwise computers could do crazy shit." Well, looks like they can do crazy shit!
P != NP is more likely akin to a fundamental mathematical law, like Noether's theorem [0]. No amount of self-improving AI will alter the fundamental laws of physics. More provincially, in our solar system there's a cap on energy, so any system that uses it will eventually hit a ceiling (like a self-improving AI that eats up energy resources at an exponential pace).
As motivation for why P != NP, one can think of it as a finite restatement of the Halting Problem [1] in the form of asking whether a (polynomially sized) Turing machine has an input that will halt in K steps.
> As motivation for why P != NP, one can think of it as a finite restatement of the Halting Problem
That's a really poor motivation. By analogy, you can think of the NP question as solving a polynomial equation over Z_2. What you're arguing is that because for Z the problem is unsolvable (due to the DPRM theorem), then for Z_2 we have to try all the solutions.
That argument fundamentally ignores that the source of unsolvability is the infinite nature of Z.
For the record, I believe that P=NP (see https://news.ycombinator.com/item?id=35729626). I wish people spent more time looking for a polynomial algorithm using a logic approach, as I am doing (trying to extend the 2SAT and XORSAT proof to 2XSAT). This approach could give new algorithms that actually compose and can be parallelized (DIMACS is a really awkward representation).
Gödel's incompleteness theorem and the Halting problem are very nearly equivalent. I believe Turing, very soon after hearing about Gödel's incompleteness theorem, used it to show the undecidability of the Halting problem. The consequences of Gödel's incompleteness theorem and the Halting problem wove into many outstanding problems, including Hilbert's 10th problem.
I would argue that computation is a much more natural setting for thinking about these ideas. Whether it's recursively enumerable sets, Diophantine equations or set theory doesn't fundamentally matter so much, as they're all equivalent in some way. I wouldn't argue that you, or anyone else, shouldn't think of it in that way, but in my opinion, for many people, myself included, casting it through a computational lens makes more sense.
Not to harp on a small point but I would argue the diagonalization proof of the Halting problem relies on there being different orders of infinity, not that Z itself is infinite. Since Gödel and Cohen, we know that the continuum hypothesis can be true or false depending on which (consistent) axioms you choose, so I would argue that what's fundamental is not so much which axioms we choose for how we think about infinities, but the fact that consistent systems of sufficient complexity must be incomplete.
I'm firmly in the P!=NP camp, but I commend you for putting your beliefs out there and actually doing the work. I do believe people throw up their hands when they hear "NP-Complete" when they absolutely shouldn't. The path from problems being "easy" (polynomial) to "hard" (exponential) is not very well understood, and even if a problem is NP-Complete in general, that doesn't mean a particular ensemble the instances are drawn from couldn't be solved by some clever polynomial-time algorithm.
And of course, P ?= NP still remains an open problem so we still don't understand something pretty fundamental.
> Not to harp on a small point but I would argue the diagonalization proof of the Halting problem relies on there being different orders of infinity, not that Z itself is infinite.
I am philosophically very close to finitism, and I think from the perspective of Skolem's paradox it's not clear that different infinities aren't just a "mirage", so to speak. The distinction between finite and infinite seems to be much more "real" to me.
> but the fact that consistent systems of sufficient complexity must be incomplete
The problem is that if P!=NP, then it would seem to imply (from what I have seen) that there is a large class of finite deduction systems which are incomplete, no matter what rules of inference you pick. This to me is a lot more counterintuitive than simply accepting that finite and infinite are just very, very different worlds, and our intuition easily fails when we transfer results from one to another.
And as I describe in that other thread, with 2XSAT being NP-complete, the idea that you have two finite sets in Z_2^n that are each relatively easy to describe (an instance of 2SAT and an instance of XORSAT), but whose intersection (an instance of 2XSAT) is somehow "difficult" to describe, is just... really, really hard to take seriously. Of course it's just another intuition, but I don't see how anybody who believes that P!=NP can look at this and not start doubting it.
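(To make the setup concrete, here is a tiny brute-force sketch of a 2XSAT instance as I mean it: a 2SAT part and an XORSAT part over the same variables, satisfied exactly on the intersection of the two solution sets. The clause encoding is just for illustration, and the search is of course exponential.)

from itertools import product

def satisfiable_2xsat(two_clauses, xor_clauses, n):
    # Brute force over all of Z_2^n (exponential; illustration only).
    # two_clauses: pairs of signed 1-based literals, (a, b) = "a OR b".
    # xor_clauses: (vars, parity) = "XOR of the listed vars == parity".
    for bits in product((0, 1), repeat=n):
        lit = lambda l: bits[abs(l) - 1] == (1 if l > 0 else 0)
        in_2sat_set = all(lit(a) or lit(b) for a, b in two_clauses)
        in_xor_set = all(sum(bits[v - 1] for v in vs) % 2 == p
                         for vs, p in xor_clauses)
        if in_2sat_set and in_xor_set:  # a point in the intersection
            return True
    return False

# (x1 OR x2) AND (NOT x1 OR x3) AND (x1 XOR x2 XOR x3 = 1)
print(satisfiable_2xsat([(1, 2), (-1, 3)], [([1, 2, 3], 1)], 3))  # True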
> I do believe people throw up their hands when they hear "NP-Complete" when they absolutely shouldn't.
I very much agree. I was originally agnostic on whether P=NP (now I lean toward believing that P=NP and that there is actually an efficient algorithm). I always felt that assuming P=NP ("assuming" in the philosophical, not mathematical, sense) and looking for a polynomial algorithm (asking what the polynomial algorithm would have to look like; for example, it would have to represent an exponential number of solutions in order to look at them all at once) is a better way to progress on the question than assuming P!=NP. (Kinda like if we assume that the world is knowable, we are more motivated to know it, than if we assume it's unknowable.)
It's also better in practice, because if you actually try to construct a polynomial algorithm and it fails, you get a much more concrete idea of what the obstacle is (a specific set of instances where it fails). I think that, because of the natural proofs barrier, if you assume P!=NP, then you believe that progressing this way is essentially impossible. So I think that's why people give up: if you believe P!=NP, resolving it this way is hopeless, and at the same time nobody has any better idea. However, if you even slightly believe that P=NP and that humans are just not smart enough to have figured it out in 50 years (remember, it took centuries to figure out Gaussian elimination, which seems completely obvious in retrospect), then you're not constrained by natural-proof limitations and there is no reason for hopelessness.
> I am philosophically very close to finitism, and I think from the perspective of Skolem's paradox it's not clear that different infinities aren't just a "mirage", so to speak. The distinction between finite and infinite seems to be much more "real" to me.
I think I'm also in the same camp (finitism). My view is that infinities are kind of "verbs" rather than "nouns", even if we use them as if they were nouns for shorthand. I guess another way to say that is infinities have algorithms associated with them, where we kind of "turn the crank" and get more output.
> And as I describe in that other thread, with 2XSAT being NP-complete, the idea that you have two finite sets in Z_2^n that are each relatively easy to describe (an instance of 2SAT and an instance of XORSAT), but whose intersection (an instance of 2XSAT) is somehow "difficult" to describe, is just... really, really hard to take seriously. Of course it's just another intuition, but I don't see how anybody who believes that P!=NP can look at this and not start doubting it.
I don't understand the hesitation here. Once you have a fundamental gate (NOR, NAND, etc.) you have universal computation. In my opinion, 2-SAT is just barely in P because of the extraordinary restriction that each clause have 2 variables. As soon as you relax that condition, it's easy to do arbitrary computation.
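(A quick sanity check of the universality claim, in a few lines of Python: build the other standard gates out of NAND alone and verify them exhaustively. The helper names are just for illustration.)

def NAND(a, b):
    return 1 - (a & b)

# Every other basic gate, built from NAND alone:
def NOT(a):     return NAND(a, a)
def AND(a, b):  return NOT(NAND(a, b))
def OR(a, b):   return NAND(NOT(a), NOT(b))
def XOR(a, b):  return AND(OR(a, b), NAND(a, b))

# Exhaustive truth-table check over all inputs.
for a in (0, 1):
    for b in (0, 1):
        assert NOT(a) == 1 - a
        assert AND(a, b) == (a & b)
        assert OR(a, b) == (a | b)
        assert XOR(a, b) == (a ^ b)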
> ... if you assume P!=NP, then you believe that progressing this way is essentially impossible.
I think we're in agreement here. My personal view is that it's not so much that it's better to assume P=NP as it is to understand that there are different ensembles of problem space to choose from, and even if the larger "space" is NP-Complete, the ensemble might only produce "almost surely" polynomial-time-solvable instances. This is the case for Hamiltonian Cycle problems on Erdős–Rényi random graphs, for example.
> ... then you're not constrained by natural-proof limitations and there is no reason for hopelessness.
I do wish I understood the natural proof limitation a bit better. I suspect there's a loophole somewhere or that, at the very least, it points us to another region of proof technique, but I just don't understand it well enough to know what's going on.
> ... now I lean toward believing that P=NP and that there is actually an efficient algorithm ...
I guess I don't have much hope of convincing you otherwise, but I would like to point out again the "finite Halting Problem" interpretation. For an arbitrary program F(inp, K) (taking an n-bit input `inp` and running for at most K steps), you're saying that the following exhaustive search can effectively be short-circuited from exponential to polynomial:
def has_accepting_input(F, n, K):
    # Exhaustive search: does any of the 2^n n-bit inputs make F
    # accept within K steps?
    for inp in range(2 ** n):
        if F(inp, K):
            return True
    return False
To my eyes, if this can be short-circuited, it's essentially saying that for arbitrary functions there's no actual reason to do the computation at all. In other words, you're saying one can do better than raw simulation in every single case. While potentially true, it seems like a pretty bold statement.
> In my opinion, 2-SAT is just barely in P because of the extraordinary restriction that each clause have 2 variables. As soon as you relax that condition, it's easy to do arbitrary computation.
I think your intuition is wrong here. The number of variables in a clause is not what makes NP difficult. XORSAT also allows an arbitrary number of variables per clause, and it is solvable in P. Furthermore, the 2XSAT reduction shows that you can represent any 3-SAT clause using only 2-SAT and XORSAT clauses.
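(For concreteness, a minimal sketch of why XORSAT is in P: an XORSAT instance is just a linear system over GF(2), so Gaussian elimination solves it in polynomial time. The clause encoding here is my own, for illustration.)

def solve_xorsat(clauses, n):
    # Gaussian elimination over GF(2), polynomial time.
    # clauses: list of (vars, parity) meaning "XOR of vars == parity";
    # vars are 0-based indices. Rows are packed into ints: bit i is the
    # coefficient of variable i, bit n is the right-hand side.
    mask = (1 << n) - 1
    pivot_rows = []  # list of (pivot_column, row)
    for vs, parity in clauses:
        row = parity << n
        for v in vs:
            row ^= 1 << v
        for col, prow in pivot_rows:  # reduce against existing pivots
            if (row >> col) & 1:
                row ^= prow
        if (row & mask) == 0:
            if (row >> n) & 1:
                return None       # derived 0 = 1: unsatisfiable
            continue              # redundant clause
        col = (row & -row).bit_length() - 1  # lowest set bit = new pivot
        # Keep each pivot column present in exactly one row (Gauss-Jordan).
        pivot_rows = [(c, p ^ row if (p >> col) & 1 else p)
                      for c, p in pivot_rows]
        pivot_rows.append((col, row))
    # Free variables default to 0, so each pivot variable just equals
    # its row's right-hand side.
    solution = [0] * n
    for col, prow in pivot_rows:
        solution[col] = (prow >> n) & 1
    return solution

# x0 XOR x1 = 1, x1 XOR x2 = 0  ->  e.g. [1, 0, 0]
print(solve_xorsat([([0, 1], 1), ([1, 2], 0)], 3))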
> To my eyes, if this can be short-circuited, it's essentially saying that for arbitrary functions there's no actual reason to do the computation at all. In other words, you're saying one can do better than raw simulation in every single case. While potentially true, it seems like a pretty bold statement.
I don't think that's what P=NP would imply. Remember, if P=NP, the algorithm is polynomial in the worst case; it can still be true that for a portion (even a majority) of actual instances, the simulation is faster than applying the algorithm. (It's also possible that for some instances, functions that have already been properly "inverted", the decision algorithm and the simulation are equivalent; this is similar to linear equations, where once you have factorized the matrix into row echelon form, applying the matrix is the same process as solving it.) What the result would say is that you don't need to be exponential in the worst case.
> I think your intuition is wrong here. The number of variables in a clause is not what makes NP difficult. XORSAT also allows an arbitrary number of variables per clause, and it is solvable in P. Furthermore, the 2XSAT reduction shows that you can represent any 3-SAT clause using only 2-SAT and XORSAT clauses.
XORSAT is not NP-Complete because XOR gates can't be composed to create arbitrary computation (compositions of XORs only ever yield linear functions over Z_2). 2SAT is restrictive because, even though it has a "fundamental" gate (NAND/NOR), the "splay" of variable interaction is so limited. In some sense, 2SAT and XORSAT, each individually, are conspiring to prevent general-purpose computation.
> I don't think that's what P=NP would imply. ... What the result would say is that you don't need to be exponential in the worst case.
If P=NP, then the program I listed above can be short circuited from exponential search to polynomial.
I don't think you understand. Sure, 2SAT and XORSAT are, in isolation, too weak to be NP-complete. However, if you combine them, and allow for both types of clauses in the same instance (on a common set of variables), then surprisingly it is NP-complete (I call that class 2XSAT), by reduction from 3SAT.
This is, to my knowledge, a previously unknown reduction and the reason why I now strongly believe that P=NP.
Yes, I think I understand, both why you believe P=NP and what the 2XSAT construction is.
You're basically saying "Look at this polynomial problem and this other polynomial problem, if we combine them in a natural way, suddenly it becomes exponential?". My point is that your intuition is wrong. For example, XOR by itself is not Turing machine equivalent, nor is NOT, nor are AND and OR by themselves, but combine some subset and suddenly you get NAND, NOR, etc., which give arbitrary computation.
One of the interpretations of the Halting Problem is that one can do no better than raw simulation. The intuition, by extension, is that for NP-Complete problems, one can also do no better than raw simulation (search the space for solutions). If we could do better than simulation, if there were a "short circuit" to the computation, then the program I listed above, which looks suspiciously like running an arbitrary program with some time and space bounds, would not need to run in exponential time.
Arbitrary computation is the norm, not the exception. 2SAT and XORSAT are the exception because of their restrictions putting them in polynomial time and their inability to do arbitrary computation. If their fragility is disturbed, even by adding just a whiff of freedom in the form of more gates or arrangement, then they suddenly allow for arbitrary computation.
You believe your addition is just a slight nudge, so the topple into arbitrary computation and, potentially, exponential time seems drastic; but you have a bias about what's "natural". For example, you could have decided to add some portion of 3-variable clauses to the 2SAT instance to see how quickly it blows up (if at all). Or you could have mixed some proportion of XOR clauses into the 2SAT instance to see how quickly it becomes intractable (or if it stays tractable). Which is more natural? Which is the smaller nudge? Why is your nudge not a shove?
FYI, there's a lot of work on NP-Complete phase transitions. When I said this is poorly understood, I meant it. For all intents and purposes, "random" 3SAT (each clause choosing its variables uniformly at random, with each literal negated with probability 1/2) is solvable in practice, even for millions of variables. As far as I know, there isn't a good proposal for, or understanding of, a "natural" and "random" 3SAT ensemble that remains difficult as the scale increases.
> You're basically saying "Look at this polynomial problem and this other polynomial problem, if we combine them in a natural way, suddenly it becomes exponential?". My point is that your intuition is wrong.
Yes, but that's not the only thing I am saying. Look more closely at what 2XSAT really is: it is basically 2SAT transformed onto an affine subspace of Z_2^n. You claimed that the complexity of 3SAT comes from having 3 variables per clause rather than only two, as in 2SAT, but here I am showing an NP-complete class that has the same property as 2SAT!
We can also restate the 2XSAT reduction this way: the only clause you need for NP-completeness is one of the form (A XOR B) OR C (for arbitrary literals A, B, C). As you can notice, this clause already has the character of a 2SAT clause (it forbids 2 of the 8 combinations of A, B, C) rather than of a 3SAT clause (which forbids 1 combination out of 8).
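(This count is easy to verify by brute force:)

from itertools import product

for a, b, c in product((0, 1), repeat=3):
    if not ((a ^ b) or c):
        print("forbidden by (A XOR B) OR C:", (a, b, c))
# prints exactly 2 of the 8 assignments, (0, 0, 0) and (1, 1, 0);
# a plain 3SAT clause A OR B OR C forbids only (0, 0, 0).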
I think you're actually assuming the conclusion if you think that because something gives rise to "universal computation", it must be difficult. That possibly silently assumes P!=NP.
Still, the main point the 2XSAT reduction brings (at least to me) is that it hints there is a clever generalization of the algorithms for 2SAT and XORSAT that runs in P (and if there isn't, the attempt will at least explain more clearly what the obstacle is). So that's my strategy: I am generalizing the proof of 2SAT to handle what I call generalized literals (XORs of any number of variables, i.e. linear equations) rather than simple literals.
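(For reference, here is a minimal sketch of the classic polynomial 2SAT procedure I am trying to generalize, in the style of Even, Itai, and Shamir: tentatively set a variable, propagate implications, and commit if no contradiction arises. The encoding is my own; the linear-time SCC algorithm would serve equally well.)

from collections import defaultdict

def solve_2sat(clauses, n):
    # Literals are signed 1-based ints; clause (a, b) means "a OR b",
    # which yields the implications (NOT a -> b) and (NOT b -> a).
    implies = defaultdict(list)
    for a, b in clauses:
        implies[-a].append(b)
        implies[-b].append(a)

    def propagate(lit, assignment):
        # Tentatively set lit true and follow all implications.
        # Returns the extended assignment, or None on contradiction.
        local = dict(assignment)
        queue = [lit]
        while queue:
            cur = queue.pop()
            var, val = abs(cur), cur > 0
            if var in local:
                if local[var] != val:
                    return None
                continue
            local[var] = val
            queue.extend(implies[cur])
        return local

    assignment = {}
    for v in range(1, n + 1):
        if v in assignment:
            continue
        ext = propagate(v, assignment) or propagate(-v, assignment)
        if ext is None:
            return None  # both values contradict: unsatisfiable
        assignment = ext
    return [assignment[v] for v in range(1, n + 1)]

# (x1 OR x2) AND (NOT x1 OR x2) AND (NOT x2 OR x3)
print(solve_2sat([(1, 2), (-1, 2), (-2, 3)], 3))  # -> [True, True, True]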
> Or you could have mixed some proportion of XOR clauses into the 2SAT instance to see how quickly it becomes intractable (or if it stays tractable).
I don't understand this paragraph. I am not sure how I would measure whether a particular instance of 2XSAT is tractable or not. (I think they are all tractable, but I am still working on it.) I think the "hard" but satisfiable instances of 2XSAT have a really low-dimensional affine subspace defined by the XORSAT portion, and a 2SAT portion just weak enough not to rule out all the remaining solutions.
> FYI, there's a lot of work on NP-Complete phase transitions.
I am really not into phase transitions, but I think 2XSAT being NP-complete could explain them pretty nicely. I think most of the observed behavior comes from the XORSAT portion dominating the hard-but-satisfiable instances. AFAICT XORSAT exhibits a similar phase transition: a random XORSAT instance either has many solutions or becomes unsolvable, with only a relatively small window of low-dimensional solution spaces in between. The only difficulty in analysing this is that the 2SAT portion of 2XSAT, if it is dense enough, can also contribute to the XORSAT portion (by forcing some variables to be constant or equal).
The catch here is that P vs NP talks about worst-case behavior. Many SAT solver inputs can be solved quickly with good heuristics, but there are some that are still difficult no matter what you do. One way to see this is via the phase transition behavior of SAT.
If you have too few constraints, it's easy to find a solution to the problem. If there are too many constraints, it's also easy to show there are no solutions. But in the middle there's a sweet spot in the ratio of constraints to variables where the problem becomes fiendishly difficult.
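(For anyone who wants to poke at this, a minimal sketch of the standard random ensemble: for 3SAT the transition sits at a clause-to-variable ratio of roughly 4.27, and the hardest instances cluster around it.)

import random

def random_3sat(n_vars, ratio=4.27, seed=None):
    # Uniform random 3SAT: each clause picks 3 distinct variables,
    # each negated with probability 1/2 (DIMACS-style signed ints).
    rng = random.Random(seed)
    n_clauses = round(ratio * n_vars)
    return [[v if rng.random() < 0.5 else -v
             for v in rng.sample(range(1, n_vars + 1), 3)]
            for _ in range(n_clauses)]

# ratio well below 4.27: almost surely satisfiable; well above:
# almost surely unsatisfiable; near 4.27: the hard regime.
instance = random_3sat(200, seed=0)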
> but not be able to figure out what pattern of bits gets a bunch of AND and OR gates to output a "1"? 10-15 years ago, the basic idea was "P≠NP because otherwise computers could do crazy shit." Well, looks like they can do crazy shit!
I think you've fundamentally misunderstood the meaning of the P!=NP problem. In very simplified terms, it's not about what a computer can do, but about how long it takes to do something in the worst case.
There already exist local-search-based algorithms that can find solutions way faster than a complete SAT solver can... or they get completely stuck, unable to make any progress.
All an LLM does is guess the next token based on the previous tokens and its training weights. For it to give you a solution to a large SAT problem, it would have to spit out a million-character-long binary string.
The likelihood that most of that string will be entirely hallucinated is very high. LLMs are not magic.
Deep Learning is already used internally in some SAT solvers to heuristically pick in which direction the search should go.
> "10-15 years ago, the basic idea was "P≠NP because otherwise computers could do crazy shit."
Everyone is downvoting you because that's not the mathematical explanation, but it's absolutely the informal explanation that the 'pop science' outlets were giving.
I don't mind the downvotes as I expected this would be somewhat provocative, going against the general grain that people think P≠NP. I agree that, in general, this is not a mathematical explanation for why P≠NP, but a rough heuristic to explain why people think it is true, predominantly in pop sci circles. I would also emphasize that there is no mathematical explanation for P≠NP - or else the problem would be solved - and at the end of the day, the reasons that computer scientists suspect this to be true are equally heuristic in nature, not amounting to much more than "we tried to solve these problems and couldn't do it."
> "People think that P≠NP, because they tried it hard, and did not find any NP-hard problem in P."
That was the informal explanation given to computer scientists. But it's not so convincing to non-computer-scientists, and it's not the informal 'pop science' explanation.
An example (not the only one) of the kind of 'pop science' explanation I mean is Impagliazzo's Five Worlds (https://gwern.net/doc/cs/cryptography/1995-impagliazzo.pdf). One of those hypothetical worlds he called 'Algorithmica', and it's where P = NP. One of the amazing and outlandish things that could be accomplished in such an exotic world would be the following feat: "Thus, a computer could be taught to recognize and parse grammatically correct English just by having sufficiently many examples of correct and incorrect English statements, without needing any specialized knowledge of grammar or English."
It's not so wild to me to think that, if someone's understanding of P vs. NP came from that kind of pop science article, they would think we should start considering more seriously that we are in the Algorithmica (P = NP) world where such feats are possible!
If P = NP because there's an O(N^{Graham's Number}) polynomial algorithm - a 'galactic algorithm' indeed - then the lay description is theoretically correct, but meaningless in practice.
In particular, the hand-waving is in the phrase 'sufficiently many examples.' This may be impossible to provide even if only a googol (10^100) examples are needed, because of the mass and energy limitations of the universe - there's only so much you can do with 10^80 atoms, and the speed of light is too slow.
The Wikipedia entry for "galactic algorithm" at https://en.wikipedia.org/wiki/Galactic_algorithm even mentions SAT: "a hypothetical large but polynomial O(n^2^100) algorithm for the Boolean satisfiability problem, although unusable in practice, would settle the P versus NP problem".
Algorithmica may therefore be very little different from our world, other than that people would no longer debate whether P = NP.
Thank you. I stand corrected: 10-15 years ago, people indeed argued that "P≠NP because otherwise computers could do crazy shit" (one person at least).
Informal, as in poorly-researched clickbait pop-science. Most of the serious pop-science sources (Scientific American, Quanta Magazine, Computerphile, etc.) have explained it correctly, without resorting to woo.