I tried to argue against superdeterminism the last time this came up and got a pile of downvotes because people mixed it up with determinism. And I see this is happening all over again in the comments below.
I'm not even going to try this time, I'm just going to say to everybody reading this: superdeterminism is not at all the same thing as determinism. It is a far stronger assumption with far far more unintuitive consequences for our understanding of nature. If you're reading this and just thinking "superdeterminism is okay because there's no free will", then you've been suckered by this article into believing a massive oversimplification.
I really agree with you here. Superdeterminism is much weirder and harder to accept than non-locality. Of course, with enough non-locality you'll end up with something just as awkward as superdeterminism. I'm trying to learn more about decoherence as an alternative to wave-function collapse.
I'm listening to the Intro to QM course from MIT's OpenCourseWare [0] and I have to say that QM represents a complete break with the classical past, not because of a scientist's ambition or a quirk of history, but because the experimental evidence demands it. The evidence results in a few postulates, and QM is really the only theory that satisfies them, in the sense that any theory satisfying those postulates will look like Schrödinger's equation. The story is not over at all; we're still very much at the beginning of understanding it.
To me decoherence always seemed so obviously the solution to these 'problems in QM' that I genuinely don't understand why we are still having these quasi-scientific discussions. Am I missing something, or is there a ton of uninformed arm-chair science going on?
What are the scientific arguments against decoherence?
What do up-to-date theoreticians think?
Think of it this way - decoherence depends on regions of the wave function more or less becoming isolated from one another in such a way that the results of experiments for classical things in those regions match our results. The wave function is still fundamental, but classical physics emerges as a limit.
The problem with decoherence is that the underlying physics of the wave function is still profoundly non-local in the sense that regions of the wave function don't have a simple relationship with regions of physical space.
And yet, classically, the notion of locality pertains precisely to physical space and is deeply related to fundamental physics. In fact, locality is still fundamental to the formulation of quantum mechanical theories, even if the quantum mechanical description ends up having some non-local features. And there isn't any philosophical or physical intuition that resolves this disconnect.
Decoherence has a variety of other philosophical issues. In particular, it requires that we accept the idea of the wave function (something we never see or interact with directly, for which we have no direct evidence) as fundamental and real AND that we take our day to day experiences, upon which all of our physical sciences are based, as derived, perhaps even, in important ways, not really real. In any case, the theoretical terms in which decoherence actually resolves the measurement paradox aren't fully understood, either mathematically or in terms of the fundamental ontological status of things.
Thank you for this very well articulated response.
I don't see why locality is a requirement. What is it that makes a theory with particles being points in a six-dimensional position-momentum space acceptable, but particles being complex-valued functions over a three dimensional space unacceptable?
> it requires that we accept the idea of the wave function (something we never see or interact with directly, for which we have no direct evidence) as fundamental and real AND that we take our day to day experiences, upon which all of our physical sciences are based, as derived, perhaps even, in important ways, not really real.
I see no issue in pure quantum states being fundamental. Our day to day experiences are not compatible with a number of things we hold to be true. Take the physics of fluids, for example: it suggests that liquids are infinitely divisible, which we know to be false. In that sense, fluid physics is decidedly not real. But it can also be derived as a very good approximation of the underlying reality on larger scales, similarly to how classical theories are good approximations of the underlying quantum reality on larger scales.
I do realize that my interpretation requires decoherence to work such that the pure quantum states reduce to ones that are well approximated by classical theories, and I'm not sure if we have evidence that decoherence works this way.
No mainstream physicist really objects to decoherence - it is obvious. But just decoherence doesn’t give you single outcomes - it gives you many worlds.
And people do debate how to derive our single world experience from many worlds. It can’t be done without more assumptions.
Many in this field do accept it, but say the other worlds are not real (QBism, dBB).
But that position is philosophically weak, so those against many worlds still look for alternatives.
Decoherence does not give you Many Worlds, or at least not unless you interpret it that way.
Decoherence, or more strongly environmental superselection from something like electromagnetic scattering, results in a classical probability distribution over the macroscopic observables; more accurately, it renders the algebra of macroscopic properties Boolean. This means there is no interference between the terms, and the probabilities simply reflect ignorance of facts which have already occurred.
Once this superselection process has occurred the mathematical structure of macroscopic observables is just as it is in classical statistical mechanics. There's no need to read this as multiple worlds, although you can if you want to. If interference terms persisted you might have more of a case for Many Worlds. Even then though there are other ways of reading the formalism.
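To make the "no interference between the terms" point concrete, here is a toy numpy sketch (my own illustration, with an arbitrary decay factor standing in for the environment): a qubit in superposition passed through a pure dephasing channel. The off-diagonal interference terms of the density matrix vanish, while the diagonal entries survive as an ordinary classical probability distribution.

```python
import numpy as np

# Qubit in an equal superposition: |psi> = (|0> + |1>)/sqrt(2)
psi = np.array([1.0, 1.0]) / np.sqrt(2)
rho = np.outer(psi, psi.conj()).astype(complex)  # pure-state density matrix

def dephase(rho, decay):
    """Pure dephasing channel: the interference (off-diagonal) terms
    shrink by `decay`; the populations (diagonal) are untouched."""
    out = rho.copy()
    out[0, 1] *= decay
    out[1, 0] *= decay
    return out

# Long after the environment has scattered off the qubit
# (the 1e-12 factor is illustrative, not derived):
rho_late = dephase(rho, 1e-12)

# The diagonal survives as a classical probability distribution...
print(np.real(np.diag(rho_late)))  # ~[0.5, 0.5]
# ...while the interference terms are effectively gone:
print(abs(rho_late[0, 1]))         # ~5e-13
```

Once the off-diagonals are negligible, the remaining structure is exactly that of classical statistical mechanics: probabilities that can be read as ignorance of an already-settled fact.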
I think when you add in the word "experience" you turn the physics problem into a philosophical one, and every pragmatic scientist wanders off to work on something else. Many worlds is totally sufficient for every question except for the nature of consciousness, and there are some very good reasons to believe that consciousness is non-empirical.
I don't see many-world arising from decoherence, please elaborate.
Decoherence doesn't give you single outcomes, but it gives you a classical probability distribution (like a statistical ensemble) over pure quantum states, with the pure quantum states having reduced coherence (i.e. they 'look classical').
Classical probability distributions are nothing new, we don't need a many worlds interpretation to explain the butterfly effect.
Quantum states with a small amount of residual superposition also seem fine to me, as long as you are willing to accept that the world is ultimately quantum and not classical. That we don't see any quantum effects in daily life is just because the scales are too small, similar to how we don't observe relativistic effects because the scales are too large, or how we don't observe the atomicity of water. But in all these cases we can do experiments to reveal the true nature.
So during decoherence you don’t have classical worlds - the probabilities interfere so you can’t ignore the other terms. Over time that interference reduces, but as you say never disappears completely.
But at no point does one world even approximately emerge - it’s always many. I can say only the one I experience is real, but there’s no justification for it.
Your main problem, though, is thinking classically: you can't justify your theory by saying it can be reduced (after an infinite amount of time) to an old way of thinking. Classical probability is fraught with issues; saying it has always been acceptable is neither true nor a rational argument.
But, uh, a single-world experience is trivially compatible with many worlds — in every world, the human is in a pure state that corresponds to a normal human experience of continuously living in a single world. If we don't require consciousness to be a single supernatural entity that flows along the timeline, selecting a world to visit at every branching point, then... that's it? Nothing else that still needs to be explained?
I apologize for the naivety of my line of thinking, but wouldn't the locality of relativity slot in at that point? Other realities could all be real, but only a subset could be real/accessible from the perspective of a given measurement device. As a guy who reads popular books on the subject to fall asleep, that seems like the obvious place for the two theories to couple. What am I missing?
Decoherence on its own still has a basis problem. You need superselection to reduce that to one basis. However, this was shown long ago (in the 1980s), so in essence decoherence + superselection does solve these problems.
If you're not familiar with these terms I can explain.
Just out of curiosity, what is weird about non-locality? From my super naive perspective that's just saying that things don't necessarily work underneath the hood the way they appear to work. For me (super naive, remember ;-) ), that seems completely reasonable even if it might be very inconvenient. What am I missing?
You're missing the special theory of relativity. Nature is local. When you add nonlocality, contradictions arise; you can try to just ignore them or patch them with ad hoc hypotheses, but such things were tried before and turned out to be failures, indicating that the premise is wrong.
Interesting. If you have some pointers for something to read that discusses why special relativity requires locality, I'd love to read it. I have no real idea where to start searching.
Edit: Just to be clear, I'm aware that Bell's theorem says that QM must either break locality or realism, but I don't really understand why it can't break locality. While incredibly inconvenient, wouldn't that solve the problem? Again, I realise I'm naive, so I don't actually suppose my line of reasoning is correct ;-)
Special relativity is essentially an explanation of why the speed of light is a constant regardless of how you measure it. That is, if you're in a train moving at half the speed of light relative to the ground, and someone fires a laser in the same direction as the train from the last station you passed through, that laser beam will move towards you at the speed of light. If you fire a laser back, it will reach the station at the same time as the laser from the station reaches you (as seen by an observer in the station).
This makes no sense unless the speed of light is a fundamental physical constant, so that motion in general depends on the speed of light, which is what special relativity postulates.
Now there are ways to have a special kind of non-locality that do not violate special relativity - you can have phenomena that happen at infinite speed, but only if they do not carry mass or energy or any information at all. The common interpretation of wave-function collapse is an example of such a phenomenon.
I'd also note that relativistic energy itself enforces the speed limit: the total energy of a moving object is E = γmc² with γ = 1/√(1 − v²/c²), which diverges as v approaches c, and the familiar kinetic energy mv²/2 is just its low-speed approximation.
There are interpretations of quantum mechanics that give up on locality, most notably the Pilot Wave Theory[1]. It does work, and it is compatible with relativity.
I think that may be the reason it's not very popular: ok, so we've got these faster-than-light pilot waves, but we can't actually use them to do anything faster than light. They're just there for bookkeeping. (That said, Many Worlds suffers from the same problem, but it's very popular. They're two different ways of slicing up the same equation. You pick whichever one suits you.)
Physics is trying to fit reality to an equation, it is not reality itself. We don't know what an atom "is", we just know how it behaves with high precision.
If the simplest and most consistent math is a non-physical pilot wave, I don't think this really matters if it lets you calculate something more easily or correctly. I don't personally know how to use them (my five QM courses used traditional techniques) but if they give useful results it hardly matters if they're "real".
My good friend did his undergraduate thesis by noticing that Clebsch–Gordan coefficients could be used to describe grain boundary orientations in polycrystalline materials. Doesn't mean grain boundaries have spin. It's just math that was convenient and worked well.
There's a lot to be said for shutting up and calculating. If I were a physicist, I might subscribe to that myself. Since I can't calculate myself, I try to remain agnostic even to that extent.
That said, physics advances do sometimes come from asking "What if X is real?" The positron and electron spins are both poster children for that. Instead of just shutting up and calculating, people focused on the part of the calculation that seemed to imply the existence of an unobserved thing. We could, in fact, have kept going with a physics in which positrons were merely calculation conveniences; that physics is valid. But we might not have discovered the Standard Model that way.
So I'm of two minds... and in a lot of ways, I'm not really entitled to be of any minds, since my formal education stopped at undergrad, and I'm no longer capable of doing even that much math. I get leery when people with even less education want to "understand" without doing any of the math, because I fear that the best of explanations will only mislead them.
I'm not sure I understand why you see this as a dichotomy. Sometimes inspiration comes from a weird idea, sometimes it falls out of mathematical analysis.
It's not like it is exclusive, everyone thinks a bit different thankfully. Like your example of the positron and electron seems fine; math and experiment in a cycle of discovery. You wouldn't know to look for a positron if you didn't study the electron experimentally and try to come up with some math for it.
Contradictions are inconsistencies in the theory, i.e. the theory can give different results depending on how you compute. To evade this you need to apply abstract reasoning outside the theory to decide how to compute in every situation. This means the theory doesn't work by itself, i.e. it's not an objective theory. Also, by realism Bell means hidden variables, not realism at large.
I can't see what's so hard about it either. Nor what would be the problem with something like hidden states/variables. Why would it be so hard to assume that there could be hidden states in particles which we simply can't measure (maybe not yet)? Why does the world have to be directly measurable? Who told people that they ought to be able to measure every single variable directly (like the hidden state of a quantum particle), and why are they assuming that?
You should look into Bell's theorem. It is mathematical proof that (discounting superdeterminism) there is no way to explain QM observations with local hidden variables. You could have hidden variables, but only if they produce effects at infinite speed.
The big problem with infinite speeds is that, somewhat like superdeterminism, they mean that you can't do fully controlled experiments. If effects can propagate at infinite speed, the whole universe has an impact on any experiment, including the state of your measuring apparatus and so on. That doesn't make them impossible, but it explains why they are disliked in theories.
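To make Bell's theorem concrete, here is a toy Monte Carlo sketch of my own (the sign-based hidden-variable model is arbitrary, chosen only for illustration): in any local model where each outcome depends only on its own detector setting and a shared hidden variable, the CHSH combination stays at or below 2, while the quantum singlet correlation E(a,b) = −cos(a − b) reaches 2√2.

```python
import math, random

# CHSH combination S = E(a,b) - E(a,b') + E(a',b) + E(a',b').
# Any local hidden-variable model obeys |S| <= 2; the quantum
# singlet state reaches 2*sqrt(2) (Tsirelson's bound).

def lhv_correlation(a, b, n=100_000, seed=42):
    # Toy local model: a shared hidden angle lam is the common cause;
    # each outcome depends only on its own setting and lam (locality).
    rng = random.Random(seed)
    total = 0
    for _ in range(n):
        lam = rng.uniform(0.0, 2.0 * math.pi)
        A = 1 if math.cos(lam - a) >= 0 else -1
        B = -1 if math.cos(lam - b) >= 0 else 1
        total += A * B
    return total / n

def quantum_correlation(a, b):
    return -math.cos(a - b)  # singlet-state prediction

def chsh(corr):
    a, a2, b, b2 = 0.0, math.pi / 2, math.pi / 4, 3 * math.pi / 4
    return corr(a, b) - corr(a, b2) + corr(a2, b) + corr(a2, b2)

print(abs(chsh(lhv_correlation)))     # ~2.0, never statistically above 2
print(abs(chsh(quantum_correlation)))  # 2*sqrt(2) ~ 2.828
```

Swapping in any other local response functions for A and B moves the LHV number around but never past 2; that gap is what the experiments measure.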
I know that they cannot be local. My point of view is "just let them be global, build theories from there", global hidden variables don't interfere with any intuitions about the world for some reason.
The problem is you can't really build theories from global hidden variables. If the details of any experiment depend significantly\* on the state of the entire universe, until we can account for the entire universe in our measurements, we might as well stop measuring.
\* Even with Newtonian physics, the gravitational attraction of any object has nonzero values everywhere, but we know that the influence is negligible. With global hidden variables, however, the speed a billiard ball takes when I hit it may depend on the size of a planet in a different galactic cluster.
> If the details of any experiment depend significantly\* on the state of the entire universe, until we can account for the entire universe in our measurements, we might as well stop measuring.
Every experiment does depend on the entire state of the universe, even in QM, but those influences are typically small due to symmetries. At the quantum level, many of these symmetries no longer apply.
I also have a hard time disbelieving in global variables. We have a lot of evidence validating quantum field theories, and the fields in QFT are global.
Only in a trivial sense. Quantum field theories are explicitly local, constructed from Lagrangians that are deliberately Lorentz invariant, exactly to maintain locality.
In any case, quantum field theories are good at predicting stuff but almost certainly not descriptions of the true fundamental dynamics of the universe, given their known and relatively well understood divergences.
> Superdeterminism is much weirder and harder to accept than non-locality.
I disagree, with the following example to back up why I believe it is less weird.
Superdeterminism can mean that faraway events are correlated through a common ancestry. For instance: if you suddenly create a massive object, it will attract other massive objects indiscriminately, in every direction; most points in space will eventually be affected, and so they are all constrained in their space of possibilities, whether or not you can actually detect gravity.
In the case of quantum mechanics, there may well be some currently-undetectable field similar to the gravitational one, which is very chaotic at a nanoscopic level, but that is severely constrained in the shape it can form, even across large distances.
It is similar to how a large-space LCG (the PRNG) may look extremely random, but if you plotted consecutive numbers as coordinates across the complete cycle, you would get a lattice. Locally chaotic, but globally constrained.
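The LCG lattice is easy to demonstrate with a toy example (the parameters m = 256, a = 5, c = 1 are mine, chosen for illustration): every consecutive pair of outputs lands on one of at most a handful of parallel lines, even though the stream itself looks noisy.

```python
# A toy LCG: the stream x_{n+1} = (a*x_n + c) mod m looks noisy locally,
# yet every consecutive pair (x_n, x_{n+1}) lies on one of a few parallel
# lines y = a*x + c - k*m (Marsaglia's lattice structure).
m, a, c = 256, 5, 1  # c odd and a = 1 (mod 4) give a full period mod 2^8

def lcg_stream(seed, n):
    x, out = seed, []
    for _ in range(n):
        x = (a * x + c) % m
        out.append(x)
    return out

xs = lcg_stream(7, m)          # one full period: all 256 values appear
pairs = list(zip(xs, xs[1:]))

# Recover which line each pair sits on; the division is exact because
# y = (a*x + c) mod m, so a*x + c - y is always a multiple of m.
lines = {(a * x + c - y) // m for x, y in pairs}
print(len(pairs), len(lines))  # 255 pairs, only 5 distinct lines
```

Locally chaotic, globally constrained: 255 "random-looking" pairs all collapse onto 5 lines, which is the kind of hidden global structure the comment above is gesturing at.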
On the other hand, non-locality means superluminal information, which really breaks the common understanding of spacetime and of causality.
"The implications of superdeterminism, if it is true, would bring into question the value of science itself by destroying falsifiability, as Anton Zeilinger has commented" [1]
> The implications of superdeterminism, if it is true, would bring into question the value of science itself by destroying falsifiability, as Anton Zeilinger has commented
Except Zeilinger is wrong. The freedom of the experimenter is not fundamental to the scientific process, rather the nature of the experimenter's complexity is what matters. Even simple deterministic algorithms can explore an entire state space, and given we are capable of simulating such algorithms with our brains (Turing completeness), we are therefore also capable of exploring the full state space of physical theories.
Furthermore, it is not a false picture of nature at all. It very clearly describes the behaviour of that which is observable, which is exactly what science is designed to do.
A simple description of superdeterminism would be that it's a theory of hidden variables where those hidden variables evolve with intelligence-level complexity. It's a traditional method to evade falsifiability, yes :)
Yep. Superdeterminism as a particular physical hypothesis has nothing to do with a philosophical notion of determinism or debates about free will. The latter is really the subject of metaphysics which by definition is not physics.
It is like mixing implementation details of a particular Python script with a theory of programming languages.
Finally, someone who knows that determinism can be a metaphysical concept (depending on which determinism one is talking about). Not many people get that. Good to see some people have actually read or thought about it. This is what I always try to explain to people who try to counter it with quantum mechanics, as if that were a proof against determinism.
I can try for the grandparent. In physics, all equations, including quantum mechanics, are deterministic in the sense that if one knows the initial state of the universe then one knows the evolution of the universe forward and backward in time. Moreover, in classical physics the assumption was that if one knows the state of some local patch of the universe at some moment, then one can in principle tell the near future and past of that local patch without knowing the state of the rest of the universe.
Experimental observations of violations of the Bell inequalities tell us that no, one cannot tell the evolution of the local patch from the patch alone. The standard interpretation of quantum mechanics and physical superdeterminism are just different ways to explain this.
In particular, standard quantum mechanics assumes that things are still local, but that the wave function is not observable in principle, so we can only talk about statistical properties. Superdeterminism assumes that things are not local and tries to explain how.
In philosophy, determinism is essentially the opposite of free will. It implies that what people perceive as personal free will is an illusion. But this has nothing to do with the determinism of physical models. In particular, free will is compatible with physical determinism of what one perceives as the external world. One possible explanation of how this is possible is that the act of free will changes both future and past. So it looks like the future state reflecting the choice of will deterministically follows from the past; it is just that the past is different from what it would have been had the choice been different. The Stanford Encyclopedia entry on free will has more thorough explanations.
Thanks, this was very clearly written, though I'm already familiar with it. If I got it right, you're saying that the levels of determinism refer to the difference between physical determinism and metaphysical ideas (of which the idea that a conscious being's will influences both the past and the future is an example)?
> But this has nothing to do with the determinism of physical models.
It seems rather confusing to state it has nothing to do with determinism of physical models. More accurately, it does have to do with determinism of physical models if you assume a physicalist perspective, but it might not, though then you have to resort to much more involved and comprehensive models of what consciousness and will are (like changing both the future and past).
Physics cannot address the question of free will at all, as all our experience tells us that, at least globally, the universe is 100% deterministic. So one needs to go beyond physics to address that.
This is similar to the notion of time. A typical perception is that only now exists. Yet according to physics there is no now. All our physical models based on experience imply that the universe is a static 4-dimensional something. There is no now, and all points across the time dimension have the same properties, just as points across space.
One needs metaphysics to try to explain this discrepancy between perception and very successful physical models.
> as all our experience tells us that, at least globally, the universe is 100% deterministic.
I'm not sure what you mean by this, so I'm also not sure how to address it, but it does seem reasonable to assume free will simply does not exist, exactly because phenomena are either deterministic or stochastic, with no third option which would allow free will. This view is informed by physics.
> There is no now, and all points across the time dimension have the same properties, just as points across space.
This is a much more interesting problem and one that has kept me up many times.
I meant that all our fundamental physical models are fully deterministic globally. The only exceptions are the singularities of General Relativity, but even for those the belief is that a proper accounting of quantum effects should resolve them. We build those models based on experience. So here comes the contradiction with personal perception. One can always say that it simply implies that free will is an illusion. But as there are other ways to resolve this that keep free will and are compatible with the apparent determinism of the external world, the inevitable conclusion is that physics cannot resolve the issue of free will.
As for the problem of now, for me it is similar to the problem of free will. Starting from Parmenides and the Buddha, one way to resolve this was to declare that the perception of now and of movement is an illusion, similar to the notion of free will. And as with free will, that is compatible with physical models, while the opposite cannot be expressed within physical models.
Evolution of the local patch is predictable from the patch alone. Violation of the inequality is when this evolution is correlated with a distant patch. Copenhagen interprets this correlation as causation, hence FTL.
> Evolution of the local patch is predictable from the patch alone.
How do you know that there is no non-local influence that makes your predictions "from the patch alone" incorrect? I don't think this can be easily excluded as a possibility.
We can assume that there is no non-local influence and try to make progress from there, but we might be wrong about it, which is what the article is getting at, if I understand it correctly.
> Evolution of the local patch is predictable from the patch alone.
Only if you take the experimenter and his decision as part of the local patch, and take the decision to be determined by the same state which also determines the experiment's outcome, which is essentially what superdeterminism is, no?
Every time I search for it, I seem to wade through many websites with wishy-washy explanations. Then I found http://catdir.loc.gov/catdir/samples/cam051/2004045179.pdf which lists 4 types in the table of contents (just search for "determinism" in the document). However, some other websites list more types, where some of them imply the others. For example: https://www.philosophybasics.com/branch_determinism.html Recently I had an interesting discussion with a coworker, but I cannot find the website we shared to clarify what I meant by "deterministic".
Basically, metaphysical determinism says that everything is predetermined, and if something seems random, it is simply because of something we do not know yet, or something too complex to be calculated, so that we cannot predict the event that seems random. Whatever physicists come up with, for example quantum whatever, one can always say: well, it seems random, but I believe there is something we have not yet discovered or don't yet know about, which makes things behave exactly as they do, completely deterministically.
At that point it becomes a belief, not science. You can always add an unknown (or "hidden variable"). Personally, I do not think this belief is in any way worse than the belief that something "simply happens at random" with "no theoretical way of explaining why". Metaphysical determinism, in one way or another, has probably always been a big motivator for scientists to continue research.
> But because of the historical legacy, researchers who have worked on or presently work on Superdeterminism have been either ignored or ridiculed.
is too strong. I would say that the historical legacy does not have much to do with it - the reason that superdeterminism is ignored or ridiculed is that it looks absolutely wild to most physicists - much more mind-bending than the vanilla story of the Bell test, which is mad enough to begin with. That's not to say that it is ruled out - just that we have avoided it for pretty sensible reasons, rather than stupidity or some sort of blind spot.
> That's not to say that it is ruled out - just that we have avoided it for pretty sensible reasons, rather than stupidity or some sort of blind spot.
I have to question the validity of this argument, because generations of physicists have been taught to give up realism in order to accept QM. Superdeterminism is no weirder than giving up realism, it's just a weirdness to which you've grown accustomed.
I think of superdeterminism not as a theory but as a barometer - the Bell's Theorem world we live in is baffling enough that people are willing to consider superdeterminism as an explanation.
> There is no such thing as a controlled experiment
> The superdeterministic explanation is: "well, there's nothing to explain. You were simply determined to lose by the initial conditions of the universe. It couldn't have gone any other way."
This eventually led me to a quote from Anton Zeilinger about his dislike for it.
> I suggest, it would make no sense at all to ask nature questions in an experiment, since then nature could determine what our questions are, and that could guide our questions such that we arrive at a false picture of nature.
My question is: does it matter whether we are seeing a false picture, as long as the results of experiments within it are consistent and lead to new discoveries that are themselves actionable? If everything we experience is in this “false picture”, is it really false, or simply a different set of rules based on underlying circumstances?
I think the whole point being made by the authors here is that QM is dead-ending and that superdeterminism could give us more answers, not less. Why not see if it leads somewhere?
Superdeterminism is like the whole existence being a Haskell program without I/O. To get anything actionable we'll have to logically separate parts of it...
In general, all these interpretations are more popular in the technically literate web culture than in actual research in physics, where Copenhagen-style views still predominate.
For the simple reason that all of the other interpretations (Many Worlds, Bohmian, Transactional) only somewhat work with non-relativistic QM, not with QFT. Only Copenhagen works with QFT.
No no, arguing is important. It lubricates an essential step in the scientific process. No one can design experiments, let alone predict results or even understand results, without understanding.
Let me clarify. In this particular case there has been plenty of arguing already and no progress in more than 50 years. So I'm suggesting it is time to start experimenting more.
The authors don't argue that there aren't enough experiments, though. They argue that the wrong type of experiments is being conducted.
To quote Dr. Hossenfelder directly:
"In standard quantum mechanics the measurement outcomes will be non-correlated. In a superdeterministic hidden variables theory, they'll be correlated - provided you can make a case that the hidden variables don't change in between the measurements." [1]
That last sentence is the catch: if the experiment fails to show any correlation, it can always be argued that the hidden variables changed for whatever reason. If the calculated theoretical bounds (e.g. on temperature and measurement time) are insufficient, there is still no way to tell systematic errors apart from a falsification of the initial hypothesis. It's little details like this that theorists can hide behind while still shouting "Foul!" from the peanut gallery.
Since experiments cost time, money, and pin down talent, research facilities need to be picky about what they test. "Because I like it." [2] is not the most compelling argument when trying to make your case ;)
I wonder whether crowd-funding would work in this case...
This has been the mainstream position for most of those 50 years. Working on the "foundations of physics", which is what you call "arguing", was considered disreputable and career-destroying for a long time. Read, for example, "Something Deeply Hidden" by Sean Carroll for more about this.
A lot of really really smart people tried to solve this by experimenting more. It didn't work. It's time for philosophy again, and in my view, also to accept that the weirdness is not going away. Nature doesn't care about what we find weird or not.
I've been partial to superdeterminism for a long time, but I have no idea how one would go about testing it. In fact, it seems as unfalsifiable as many-worlds or other theories. Do you have a suggestion?
The kind of superdeterministic theories proposed by 't Hooft are falsifiable. From [0]: "If engineers ever succeed in making such quantum computers, it seems to me that the CAT is falsified; no classical theory can explain quantum mechanics." By "such quantum computers" he means computers that can run Shor's algorithm. "...but factoring a number with millions of digits into its prime factors will not be possible – unless fundamentally improved classical algorithms turn out to exist."
As for the author of the article I've never seen a clear proposal but it appears the idea is to do repeated measurements that display quantum effects while reducing noise as much as possible and check if there are deviations from quantum theory.
My understanding was that many-worlds could be tested experimentally if we were able to set up large objects in superpositions, and I thought that there is no reason to expect that it isn’t physically possible to do so (we just don’t know how at the moment).
You can’t get any information from that because the main interpretations all make the same predictions from an outside perspective, which is what you’ll expect to see.
Besides, putting a person in a superposition may require a machine as large as the universe, so you get issues with the speed of light.
The only way to test MWI is multiverse immortality - many worlds means you should expect to always have some future experience - there is no real death.
Hossenfelder proposes[1] to measure non-commuting variables in identical systems in as noise-free an environment as possible.
Standard QM says the measurements should be completely uncorrelated, but she argues that in a superdeterministic theory the results would be somewhat correlated.
Progress is stifled by the belief that this is not a problem, that there should be no arguing and no experimenting, and that everybody should shut up and calculate.
Not just the way it's arbitrarily hard from an engineering perspective to design an experiment to disprove string theory. It is theoretically impossible, the way it's impossible to distinguish between Copenhagen and many worlds.
No, you actually can. The authors of this article discuss the issue more in their longer paper here: https://arxiv.org/abs/1912.06462
The idea is that superdeterministic theories are deterministic, while quantum mechanical measurements are random, so you should be able to set up an experiment where QM predicts you would just get random results, but you actually get the same result each time.
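A toy sketch of that contrast (this is not the authors' actual protocol, just an illustration; the hidden variable and its threshold are made up): under textbook QM, repeated measurements on identically prepared systems give irreducibly random coin flips, while under a toy superdeterministic model the outcome is a fixed function of a hidden variable assumed not to change between runs.

```python
import random

def qm_outcome(rng):
    # textbook QM: each run is an independent, irreducibly random flip
    return rng.choice([-1, +1])

def sd_outcome(hidden):
    # toy superdeterministic model: outcome fully fixed by the hidden state
    return +1 if hidden > 0.5 else -1

rng = random.Random(0)
hidden = 0.73  # hypothetical frozen hidden variable, unchanged between runs

qm_runs = [qm_outcome(rng) for _ in range(50)]
sd_runs = [sd_outcome(hidden) for _ in range(50)]

print(len(set(qm_runs)))  # 2: both outcomes appear, looks random
print(len(set(sd_runs)))  # 1: the same outcome every single run
```

The experimental difficulty, as the thread notes, is that any observed randomness can always be blamed on the hidden variables having changed between runs.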
There are many classical physics experiments you could do where you understand all the physics and formulas in excruciating detail and you control every variable, but you still can't get the same results every time. Things like throwing dice, as mentioned in the article, or the chaos generated by flowing fluids. Given that any superdeterministic mechanisms would be even more complex, weird, and unknown, it seems impossible to disprove. Any failed experiment could be excused by saying that there are more unknown, uncontrolled variables.
Well, the determinist would argue that if you cannot predict the result reliably every time, you do not actually know all the variables, even if you think you do.
That's the whole point right? The hope is to figure out what the hidden variables were, to make better predictions than quantum mechanics could. To un-hide them, as it were.
Determinism simply means the nature of things is deterministic, that's all. In other words, that is: given all initial conditions you can determine the exact resulting conditions.
Bell coined the term "superdeterminism" to describe the class of theories that evade his own theorem - theories which are absolutely and completely deterministic. Theories that are only partially deterministic don't evade his theorem. Hence he coined this term to highlight the difference (for those who fail to see it): absolutely/completely versus partially deterministic theories.
And so there's no real difference here. If you understand determinism, then you know that a "partially" deterministic theory is not actually deterministic...
I see the difference between determinism and superdeterminism, but it's unclear to me why, if you accept the former, you might not accept the latter.
I think it's worth thinking through and delineating superdeterminism to its utmost limits even if I wouldn't necessarily say I find it compelling.
I do wonder why the authors are so quick to reject nonreductionism, though, as nonreductionism seems fairly reasonable to me. Maybe I have a different idea of nonreductionism, but it seems to me that rejecting nonreductionism is akin to accepting Laplace's demon, which as far as I understand has been disproved. Basically, at some point the information in a system supersedes that of any system that might represent it faithfully, in part because of measurement effects -- there are a lot of parallels with QM issues.
The problem is determining what determinism determines and how it determines it. And that's a precondition before you can even consider superdeterminism.
There is nothing in classical physics that suggests the universe is deterministic on cosmic scales. There's plenty in physics which suggests it isn't.
If you want to propose any form of determinism, be it superdeterminism, a bulk universe, or any of the other popular variations, you have to start by proving that causality is infinitely precise and absolute. Because otherwise your causality is partly random and therefore not truly causal at all.
Our experience of causality suggests that real measurements have limited precision, and predictions can only be made on limited timescales.
So anyone who is proposing superdeterminism is claiming that this can be fixed - by hidden variables, with noise-free super-realistic precision, which allow a universe-wide predictive horizon.
Free will is a side issue here, because the problem doesn't go away even if the universe has no observers.
The problem isn't whether free will hides super-predictive hidden variables, it's whether it's plausible that super-predictive hidden variables exist at all.
If you believe they do, you have a first-order universe in which these mysterious entities operate with effectively infinite precision behind the scenes, to create a second-order universe which has limited precision in practice.
Of course, that may be happening. But it seems quite unlikely.
You believe that magic ad-hoc random variables are more plausible? And that somehow for whatever non-causal (so magic) reason they follow the same probability distribution?
This is clearly epistemologically weaker, but it is more seductive to the wishful thinker's mind.
I think for these people the detail just doesn't matter to them. They see near random behavior which is currently impossible to model and they shrug their shoulders and call it a magic, ad-hoc random variable.
And to a degree, they have a point: does it really matter if we can predict the exact time and location of an alpha particle as it exits a black hole as Hawking radiation? What good does modeling this phenomenon accurately do us?
At some point all those "nah, what's the advantage of knowing that exactly?" moments would pile up and we'd throw away too many questions about how things work, limiting progress. I think these kinds of things are what scientists want to know. "We" want to know everything and how it works, no?
> you have a first-order universe in which these mysterious entities operate with effectively infinite precision behind the scenes, to create a second-order universe which has limited precision in practice
There are analogues to this in mathematics, for example, our formulation of the Fourier transform has limited precision but the physical phenomenon it relates to has no reason to be limited.
The limitations of the Fourier transform are intimately tied to the uncertainty principle: https://www.youtube.com/watch?v=MBnnXbOM5S4 Reality does appear to be limited by the limitations of the Fourier transform.
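A quick numerical sketch of that tie (a toy check, not a proof; assumes Gaussian test pulses): the product of a pulse's temporal spread and its spectral spread stays pinned near the Gaussian lower bound 1/(4π) no matter how wide or narrow you make the pulse, which is the Fourier-side shape of the uncertainty principle.

```python
import numpy as np

def uncertainty_product(sigma_t, n=2**14, dt=1e-3):
    # Gaussian pulse sampled on a centered time grid
    t = (np.arange(n) - n // 2) * dt
    x = np.exp(-t**2 / (2 * sigma_t**2))
    # temporal spread, measured from the intensity |x|^2
    pt = np.abs(x)**2
    pt /= pt.sum()
    t_spread = np.sqrt((pt * t**2).sum())
    # spectral spread, measured from the power spectrum |X|^2
    X = np.fft.fftshift(np.fft.fft(x))
    f = np.fft.fftshift(np.fft.fftfreq(n, dt))
    pf = np.abs(X)**2
    pf /= pf.sum()
    f_spread = np.sqrt((pf * f**2).sum())
    return t_spread * f_spread

# each product comes out close to 1/(4*pi) ~ 0.0796, the Gaussian bound
for s in (0.05, 0.1, 0.5):
    print(round(uncertainty_product(s), 4))
```

Squeezing the pulse in time (smaller `sigma_t`) fattens its spectrum by exactly the compensating amount; non-Gaussian pulses only do worse than this bound.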
Not so fast. The uncertainty principle has narrow conditions outside which it doesn't apply. If you don't satisfy them, you can game the whole system and measure anything you want.
If you can "fix" what that video describes, you'd better start preparing your Fields medal acceptance speech. That you can "get around" it sometimes doesn't remove the underlying math.
Funny, a couple of days ago I googled superdeterminism again to see if there are any developments and came across Hossenfelder's and Palmer's paper on arxiv https://arxiv.org/abs/1912.06462 . Strangely enough, even though I've studied theoretical physics, the fact that some of quantum mechanics' claims rest on assumptions such as us having free will was never really discussed, except in a course on the philosophy of science I took, which, unfortunately, was not very scientific. I actually never heard about superdeterminism until I read one of Gerard 't Hooft's papers (see e.g. https://arxiv.org/abs/1405.1548 and https://arxiv.org/abs/1709.02874). Universities should put more emphasis on teaching the things we take for granted and give students the opportunity to question them, if we want to further our understanding.
Superdeterminism in general is a pretty absurd idea. It basically says the measurement settings you choose were predetermined. But you can use a random source from another galaxy to select them. So something like the configuration of stars in another galaxy must be conspiring to help you choose just the correct settings to fake the results QM predicts, instead of QM being actually true.
In QM, configurations may be "close together" in a way that defies intuition. An example is spatially separated, entangled particles.
So why can't configurations be "far apart" in a way that defies intuition? We model measuring at 45.01° as a perturbation of 45°, but perhaps these are very distant configurations.
I do not understand your statement. Superdeterminism does not imply QM is wrong. You cannot dismiss superdeterminism on the pretext that it would render everything pointless because our paths are predetermined. Basically, you are arguing that you choose not to consider superdeterminism because you have free will. Which sounds more absurd? I suggest you read the original paper for a better reply to your argument.
It's not literally saying QM is false in the sense of "gives wrong predictions", but it is saying it is not "what is really going on", because it is incomplete.
> it would render everything pointless as our paths are predetermined
The objection is subtler than this. It's that it's pointless because it's not a productive stance. Science is in large part about predicting results of interventions. It throws its hands up and says everything happens for essentially conspiratorial reasons, and taken fully doesn't admit the possibility of interventions. Further this is stronger than the normal determinism of classical mechanics -- there, even if we believe in determinism, nondeterminism with respect to unobserved things (such as experimenters brains) is a useful stance for discovering truths about the universe. In contrast, with superdeterminism, any possible "intervention" in this stance is "compensated for" by the initial conditions. It explains quantum mechanics only by saying "initial conditions did it", which is no better than "God did it" of medieval philosophy. In neither case can we usefully ask further questions.
If you go down that route, you can always dismiss objective reality. There is no way of knowing that what we experience is actually happening. But we still assume it does. The same holds for superdeterminism. You can say we have no way of knowing if what we measure is not a conspiracy, but that doesn't hinder us from doing science as we know it.
We paradoxically could have free will and still live in a purely deterministic universe. Even if it is deterministic, we could never do the math to determine the exact state of everything. Our inability to model the universe (without a universe sized computer) means for all intents and purposes we do have free will. Even if the arrow of time runs backwards and our perception is forward, I'm still going to pick what I eat for dinner tonight.
What is the difference between you picking what to eat and a computer 'picking' what 2 + 3 is? What you pick to eat is determined by your taste, mood, available food, a bunch of subconscious processes in your brain that you're not aware of and many other factors. If we ran a simulation of the same deterministic universe you would pick the same thing every time. Just because you don't know exactly why you did something or you aren't able to fully rationalise your choice, doesn't mean you have free will.
There is so much wrong with this. It's clear that you're essentially presupposing there is either "free will" or "determinism" when in fact the right distinction to make is "free will" as opposed to "no free will".
Anyway, when you make a rerun of the universe from the same initial conditions, you get randomness because of quantum mechanics, so the future outcome is not exactly the same, and you can't predict anything with certainty because of probabilities. But look, all of this has nothing to do with free will. Neither determinism nor quantum mechanical randomness give you an absolute-metaphysical-libertarian superwill when you're not a subject to the laws of physics at all (unless you believe that you're a soul/cartesian ego/some other supra-physical mental entity with dubious ontological status). You're basically arguing against this abovementioned concept. But actually default, regular free will is just an effective description of reality where persons have volition, and it exists as an emergent rather than fundamental thing.
Before you start to make the same argument that free will doesn't really exist, consider the question: does Hacker News exist? Well, duh, of course not! There are no websites, no Internet and no computers, it's obviously all just fundamental particles acting in some ways, you know, just the wave function of the universe deterministically obeying the Schrodinger equation, etc. Naive reductionism. But here we are, reading Hacker News. Guess what, you don't live on a level of fundamental particles. Does, for example, chess exist? Your argument implies that it does not, but here I am, playing chess in a separate tab.
So, do persons exist?
Does free will exist?
It strikes me that people don't bother to make real arguments against free will, like a psychological one, for example.
> It's clear that you're essentially presupposing there is either "free will" or "determinism"
Yes, free will doesn't seem possible in a deterministic universe.
> Anyway, when you make a rerun of the universe from the same initial conditions, you get randomness because of quantum mechanics, so the future outcome is not exactly the same, and you can't predict anything with certainty because of probabilities
Then the universe is not deterministic.
> Before you start to make the same argument that free will doesn't really exist, consider the question: does Hacker News exist? Well, duh, of course not! There are no websites, no Internet and no computers, it's obviously all just fundamental particles acting in some ways, you know, just the wave function of the universe deterministically obeying the Schrodinger equation, etc. Naive reductionism.
No, there is a difference between something existing and free will. A computer can calculate an answer to some query, and the answer exists, doesn't mean it was generated through the computer's free will.
I never see people discussing the statistical aspects of free will.
For instance, I may decide to have an apple tonight, or spaghetti, or whatever. Thus, I seem to have free will. But if one collected statistics on what I ate over time, there would be patterns, and it would be much more difficult for me to overcome those patterns with "will". The more time and events you look at, the more you see things like unconscious maintenance of weight, preferences for types of food, and so on.
Yet the long term patterns are made up of the individual choices that seem free.
I have this vague idea that some further exploration of this might be compared to the statistical ideas of quantum mechanics.
That's because it's not an interesting thought as related to the notion of free will. Everyone accepts that humans have subconscious biases that impact their decisions. The discussion of free will is higher level than that. The fact that you can't will yourself into not breathing is not a refutation of free will.
The question is essentially, when all biases are accounted for, is there some aspect of free will that remains? You experience free will constantly, and you assume it in all interactions with other agents. Is that an illusion, are we just puppets in a play? Many philosophers believe that it isn't, even if determinism is real. I'm not sure if super-determinism is still compatible or not, but it may well be.
I mean, I can will myself into not breathing. I used to hold my breath between subway stations. But that's about as long as I can do it.
I feel like there's some sort of analogy between how you can have local violations of conservation of energy where particles pop into existence from nowhere, but longer term it has to even out.
My problem with compatibilism is that it's so rigid. Why can't we just overtly define free will in such a way that we have "free will," and at the same time acknowledge that we lack "free will" under another definition? I guess what I'm trying to say is that compatibilism seems to me like it takes a hard stance on a word game.
I agree. Compatibilism either defines "free will" to be something I don't consider to be free will, or defines "determinism" into something I don't consider to be determinism (or both).
Another definition like "nondeterministic free will" doesn't make much sense, so when you say such will does or doesn't exist, it's unclear what it means and what conclusions should follow from it. Some people assume it makes sense, but only until you question it.
I see it more as if the universe were a 4D picture, with time as the 4th dimension; any concept of time "going forward" is just an illusion created by how our brains work (we need time to compute every moment), so the outcomes are already written, we just haven't seen them.
I have long been a fan of the 4D fixed universe. If you think of fractals, you can get a hint at how everything is interrelated. As you indicated, the fundamental issue is why we experience the passage of time. I don't think it's an illusion, but the most important question.
Here's a thought experiment, similar to the one-electron universe thought experiment: there exists one particle, the "consciousness particle," which randomly jumps around throughout 4D space. Whenever it appears at a 4D position, consciousness is experienced as computed by integrated information theory (or a similar model) [1].
When you experience a moment of consciousness at a specific 4D position, you experience all the memories of the "past" that are in your brain. Thus, the consciousness particle interpretation of consciousness is compatible with your direct perception of living a 3D life along a time dimension.
The way I get it is that we are measuring something with itself: measuring an unknown attribute with something (actually the same thing) that has the same unknown attribute. We cannot predetermine the settings of the measurement, as we do not know its attributes or parameters. We have no means of calibrating something we do not know. It does not have to be an atom from a distant galaxy - it very well may be, we do not know yet - but an unknown underlying interdependence locally. The superposition of an unknown number of underlying unknowns combines into what happens when we measure one unknown with another unknown. We measure our universe with itself, the taste of an apple with another apple.
This always seemed like the easiest way to explain superdeterminism. It's a universe where the laws of physics do not permit you to generate a truly random number.
The distribution of the experiment results, say the correlation in a Bell test, should not depend (significantly) on the choice of seed. But the single electron being sent through the experiment still needs to "know" what the outcome of my PRNG is in order to "know" if it should be spin up or spin down this time.
But the measurement setting (PRNG outcome) can be arbitrarily removed from the seed value (by combining PRNGs in various ways). So that doesn't sound very convincing to me.
It's unclear if you mean PRNG (pseudo random number generator) which is already deterministic, or a true random number source (eg, radioactive decay or something).
If you mean true random, then, yes, superdeterminism does indeed mean that number is predetermined.
Math still works though - there is no way for someone to peek and see what that predetermined number is so it is still random for any observer.
Is there any coherent fixed theory of superdeterminism at all? Something with a system of equations that are nailed down and that one could build solid thought experiments on top of?
The article mentions it but kind of downplays the fact that the prevailing theory of QM has a very clear mathematical formalism without any wiggle room, while superdeterminism doesn't have this.
That seems like a far better explanation for why QM has won, and makes me wonder whether superdeterminism wouldn't just be the next string theory.
It makes a strong case IMHO that there is some meat here. The main idea is that Bell inequalities make ~4 assumptions, which Hossenfelder and Palmer argue can be meaningfully weakened in precise ways that have the potential for testable predictions.
The paper also spends some time dispelling the FUD that stems from the unfortunate associations made with the name "superdeterminism."
It's only ~20 pages of mostly prose, so if you're interested, I highly recommend the quick read.
Superdeterminism isn't supposed to be an alternative to QM, usually it's an alternative interpretation of QM which allows hidden variables from what i've gathered.
To a computer scientist, superdeterminism seems like the most elegant solution to most of the current problems in physics. But it has always been firmly out of the mainstream, perhaps because it runs directly counter to our human experience. Gerard 't Hooft's framing of the universe as a cellular automaton (https://arxiv.org/abs/1405.1548) is relatively intuitive, but still only a rough sketch. Hopefully, Hossenfelder and Palmer now publicly arguing for superdeterminism will recruit some more bright minds to flesh out these models into workable theories.
Two statistically dependent variables "can be made" independent, for example by feeding one into a PRNG. This seems like a similar cop-out as "the particle changes behaviour when it is observed, hence consciousness is needed in the universe".
> Two statistically dependent variables "can be made" independent, for example by feeding one into a PRNG
No statistical test can assure statistical independence for all possible cases. The very fact that you pipe it through a PRNG means the output is deterministically correlated with the input, because PRNGs are deterministic, and some statistical test will be able to detect it. At the base level, a test that tries every conceivable PRNG, for instance.
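A minimal sketch of that point (the `scramble` function is a made-up example, not anything from the thread's papers): if you "launder" a variable through a PRNG, the output is still a deterministic function of the input, so an analyst who knows (or searches over) candidate PRNGs recovers the dependence exactly.

```python
import random

def scramble(x):
    # "randomize" x by seeding a PRNG with it -- looks random,
    # but is a pure deterministic function of x
    return random.Random(x).random()

inputs = list(range(1000))
outputs = [scramble(x) for x in inputs]

# a test that hypothesizes the right PRNG predicts every output exactly
predicted = [random.Random(x).random() for x in inputs]
print(outputs == predicted)  # True: deterministically correlated
```

Passing generic randomness tests only means the dependence is hidden from those particular tests, not that it is gone.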
Who says the universe can't "know" all the ways? That's like saying momentum can't be conserved because the universe can't know all the ways that we could transform momentum into other forms of energy and back again.
Momentum comes from the interaction of forces, there's nothing that's "needed" to know. Two billiard balls colliding and a billiard ball colliding with a ball of mud conserves momentum (in a vacuum, in space, etc) the same because it all boils down to forces. F=dP/dt
That's very different from "regardless how you throw the dice it will always "know" what you did".
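To illustrate the point about conservation following from pairwise forces (a toy sketch with the standard 1D elastic-collision formulas; the masses and velocities are arbitrary made-up values): momentum comes out conserved no matter what the bodies are, with nothing extra that needs to be "known".

```python
def elastic_collision(m1, v1, m2, v2):
    # standard 1D elastic collision: outgoing velocities from
    # conservation of momentum and kinetic energy
    v1p = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
    v2p = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)
    return v1p, v2p

# billiard balls, a ball and a boulder -- momentum balances regardless
for m1, v1, m2, v2 in [(1.0, 2.0, 1.0, -1.0), (0.17, 5.0, 42.0, 0.0)]:
    v1p, v2p = elastic_collision(m1, v1, m2, v2)
    before = m1 * v1 + m2 * v2
    after = m1 * v1p + m2 * v2p
    print(abs(before - after) < 1e-12)  # True in every case
```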
In other words, we invented some concepts (force, momentum, energy) that exhibit a symmetry under various transformations.
It's only "very different" in superdeterminism because that which is conserved doesn't yet have a widely accepted formulation that you've internalized the way you've internalized the other common concepts in physics.
Superdeterminism appears to aspire to come up with a classical model that would explain quantum correlations. If successful, it would render a quantum computer a fancy, highly parallel, very expensive, but still classical machine. A non-classical superdeterministic model would just substitute one mystery for another (sneaked-in non-locality or something).
Everything old is new again. Superdeterminism is basically the concept that everything follows from the initial conditions of the universe in a deterministic fashion. In short, it's a clockwork universe below the quantum level. It is compatible with Bell's theorem because it's not local hidden variables, it's global hidden variables--i.e. the state of the entire universe.
The problem is, according to my current understanding, superdeterminism cannot be tested.
This is my first time reading about superdeterminism, and the author hits on many of my amateur/novice intuitions on issues like the measurement problem, hidden variables, and Bell's theorem (call me weird, but these are topics I enjoy reading, studying, and thinking about--technically and philosophically).
I'd be very interested in hearing an explanation of why superdeterminism cannot be tested (it's not clearly obvious to me how that's the case), because if that is the case, it would explain why it was largely unpursued/undeveloped (similar to what someone else in this thread posted--if it's true, it supposedly offers little useful insight beyond better explaining issues in quantum mechanics by replacing one black box of uncertainty with another).
Based on the author's description, much of the issue is that the theory has had little attention and as such, is largely undeveloped (and therefore, isn't going to be developed enough for experimental testing).
> I'd be very interested in hearing an explanation of why superdeterminism cannot be tested
What experiment could you run in a superdetermined universe that would distinguish it from a non-superdetermined universe? Vice versa? There's nothing.
One thing that I am disappointed is not discussed in the article are the experiments done to validate statistical independence based on extremely old and far-away phenomena. There have been experimental validations of Bell's inequalities with measurements chosen based on distant quasars, and they held up very well. This would suggest that the state of two particles we just fired is caused by reactions in a quasar millions of years ago, or by even older phenomena.
This suggests there isn't even some kind of 'local-ish' theory of hidden variables, like in the case of throwing dice, and we would really need to account for the state of the whole universe for all of its history to accurately predict what happens when we type at our keyboards.
> This suggests there isn't even some kind of 'local-ish' theory of hidden variables, like in the case of throwing dice, and we would really need to account for the state of the whole universe for all of its history to accurately predict what happens when we type at our keyboards.
Yes, because all particles in the universe share a common entangled past because they all originated from the same source, ie. the Big Bang.
I think progress in physics is going to be on hold for a while. This kind of fundamental scientific inquiry does not proceed in a linear fashion, and there was incredible progress in the last century. Physicists seem to realise this on some level - in the last 5 years I have noticed physicists turning up in other fields, particularly origin of life and consciousness studies.
I think this becomes even clearer if you look at what the major recent experimental results in physics are.
The two that come to mind are gravitational waves and the Higgs Boson.
Gravitational waves were observed by LIGO and Virgo. These observatories were built in 1993/1994, but did not observe a gravitational wave until 2015 (many have been observed since then).
VIRGO costs ~10mil Euro/year to operate and is staffed by over 300 people.
LIGO had an initial budget of $395 million in 1994, with a $200million overhaul in 2015.
The Higgs Boson was detected using the Large Hadron Collider. The LHC began in 1995, and would not discover the Higgs until 2012. The LHC had a construction cost of $4.4 billion, with an annual operating budget of $1billion.
Most of the discoveries of the 20th century did not require anywhere near this scale of experiment. There is still some room to grow here (more powerful colliders and more sensitive gravitational observatories are being planned), but fundamental physics seems to be fast approaching the limits of our current engineering capabilities, and may need to enter a quiet period while it waits for the more applied sciences to catch up. Or, worse, the laws of physics end up being such that the "next step" of experiments is simply outside the range of what we could conceivably build.
For theoretical physicists that may be the case. I think it is intuitive that the cost of new discoveries increases the more we know about a universe that probably has a limited number of laws.
But on the other hand there are still unlimited opportunities for applied physics without having to go into more metaphysical fields. Material and energy research, optics...
I think we are still more or less brute forcing optimal lens setups for specific applications with complex simulators to fit specific applications. Useful to be a physicist in that case.
Ok, particle states are correlated in a way we've missed. Fine. Every set of particles has fewer degrees of freedom than we assumed. We can check experimentally: the heat capacity of your set of particles is proportional to the number of degrees of freedom.
It would be nice to see something about this point in the essay.
Sure, but it's the same issue. In a quantum system some degrees of freedom are "frozen out"--the energies are much higher than (Boltzmann's constant)(Absolute temperature) and they become irrelevant. But the accessible degrees of freedom show up in the heat capacity.
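The equipartition bookkeeping behind that claim can be sketched in one line (ignoring the freeze-out transitions discussed below): molar heat capacity at constant volume is Cv = (f/2)R for f active quadratic degrees of freedom.

```python
R = 8.314  # gas constant, J/(mol K)

def cv(f):
    # equipartition: each active quadratic degree of freedom
    # contributes R/2 to the molar heat capacity
    return f / 2 * R

print(round(cv(3), 3))  # monatomic ideal gas (3 translations): 12.471
print(round(cv(5), 3))  # diatomic (3 translations + 2 rotations): 20.785
```

So if a theory quietly removes or adds accessible degrees of freedom, the measured Cv is one place the change should show up.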
When the Cv of a diatomic gas goes from 3/2 R to 5/2 R there is an intermediate part that is complicated. A photon gas also has a strange Cv. I think it's not easy to dismiss superdeterminism using just the Cv, because sometimes Cv is more complicated.
[Disclaimer: I really really dislike superdeterminism. I just think that Cv is the wrong reason to dismiss it.]
For sure. The transition from 3/2kT to 5/2kT is a good example of a frozen-out mode getting "unfrozen". Within the transition we have to count density of states and not screw around with classical analogs. ( I'm an optics person, so I think the Cv of a photon gas is normal and understandable ... If I take half an hour to write it all out again!) This is technical stuff we're talking about ... but it's not difficult for particle physicists to address!
Anyway I don't think I'm dismissing the idea of "superdeterminism". I'm only puzzled that directly measuring degrees of freedom, which was a central reason for adopting quantum mechanics, isn't even mentioned in an article where the public is being told there are fewer "real" degrees of freedom than generally thought.
For "degrees of freedom" substitute "density of energetically available states." We're taking some states off the table, we have to explain whether there are measurable consequences.
I don't understand superdeterminism and I'm not qualified to dismiss it.
> Even more importantly, no one has ever proposed a consistent, non-reductionist theory of nature
It seems the point of non-reductionism is that you can't propose such a theory.
For instance, if we were in a simulation that decides whether to optimize the quantum-to-macroscopic transition based on what we're looking at, we're simply not going to find what causes wave functions to collapse, because there isn't an underlying rule to find. We're "observers" just because we're flagged as such. For us in the simulation, certain things just happen for apparently arbitrary reasons because we can't peer into the source code.
It'd be like a character in an FPS trying to figure out why shooting breaks some things but others are indestructible.
Non-determinism requires some kind of RNG that is truly random (a TRNG), right? So the next value can't be calculated from the previous one. Does this mean all values of a truly random RNG just exist, since we don't need to iterate through them in order to calculate what comes next? In that case a truly random RNG is predetermined instantly from step one, and a regular pseudo-random RNG is just as predetermined as a TRNG?
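The pseudo-random half of this is easy to check in code: a seeded PRNG is a pure function of its state, so the whole sequence is fixed the moment you pick the seed. A minimal sketch (nothing here speaks to physical TRNGs, which is the philosophical half of the question):

```python
import random

# A pseudo-random generator is fully determined by its seed: re-seeding
# replays the exact same "random" sequence, so every future value is
# already implicit in the starting state.
random.seed(42)
first_run = [random.random() for _ in range(5)]

random.seed(42)
second_run = [random.random() for _ in range(5)]

print(first_run == second_run)  # True: the sequence was fixed at seed time
```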
I hope we will see more sci-fi take the route of super-determinism as opposed to multiverse theory.
The latter is a nice plot device, but its widespread use in pop culture has overblown its likelihood of being correct.
Maybe a thousand years from now, people will begin their analogies with: 'Long ago, people used to believe the universe split up infinitely many times, at an infinite scale.'
---
I believe the idea that we have to give up free will or that fate exists is unhelpful at best.
If super-determinism is correct, then there is nothing to give up.
But more practically, if complexity theory is right that some things are inherently complex, there is a case to be made that the universe, with its variables 'locking' into place from moment to moment, is effectively unpredictable from the perspective of anything inside it, predetermined or not.
Superdeterminism doesn't obviate the multiverse, just Everett's many-worlds interpretation of quantum mechanics. There are still three other levels of multiverse according to Max Tegmark. See: https://space.mit.edu/home/tegmark/multiverse.pdf
I don't believe in free will because it doesn't make sense when you understand determinism. You cannot be free when every thought & action is the outcome of the past forces exerted upon you.
My understanding of superdeterminism doesn't necessarily rule out the multiverse theory. Maybe my idea of the multiverse isn't traditional, but I think the universe could very well repeat identically, or repeat but differ somewhat because forces occur at different intervals than in the previous cycle of the universe.
edit: people downvote on this site because they don't like thinking of not having free will lol.
Your explanation of why you don't believe in free will is very philosophically unrefined, and your smug tone (implying people who do believe in free will must not understand determinism) is not helpful.
The reality is that a great many philosophers believe that determinism and free will are perfectly compatible. There is also a supernatural defense of free will, as in many religions (where the physical universe may well be deterministic, except for interaction with a spiritual world of will, which may produce physical effects without a physical cause). Both of these ideas are examples of people who believe in free will even though they do understand (physical) determinism. There are probably other groups as well.
No. It's nonsensical to think a person created in the universe is separate from the universe. There are no great philosophers that believe in free will. It would be only wishful thinking, ignoring logic, to assume you can make a choice that's truly your own and not the outcome of the system we're in. Furthermore, neuroscience illustrates determinism (no free will), and physics did as well until people lost their minds over quantum physics, which is grossly unfinished. Please email me if you want to discuss in great detail.
> It's nonsensical to think a person created in the universe is separate from the universe. There are no great philosophers that believe in free will.
And yet compatibilism has been a common philosophical position ever since ancient times, supported throughout history by philosophers such as Schopenhauer, Thomas Aquinas, and Daniel Dennett.
The essential idea is that what we understand by free will can exist even in a purely deterministic naturalistic universe - mostly, free will should be understood as the ability of any agent to act based on its motivations. So, for example, an artificial neural net is in some important sense free to act as it 'wills' to solve a problem, even though it is following its programming.
Compatibilism is defining free will differently than how most people think of free will. Free will is an illusion in the traditional sense and thinking of compatibilism changes everything but doesn't give free will in the traditional sense.
The agent cannot make decisions free of what the system has influenced. So, it still is nonsensical to think a person has free will in the traditional sense. People are not making their own decisions when they couldn't have been different from the result of the system they reside in. I do consider Schopenhauer a great philosopher but what most people think of free will isn't a possibility. Compatibilism isn't even worthwhile because it doesn't make determinism & free will compatible but instead just redefines free will to delude people into thinking traditional free will is a real thing.
It might not bring back the magical kind of idea where we assert our will over the physical world, true. But if people really believed in that, they would also believe in actual magic, or telekinesis (of course, quite a few do, but there's really no arguing with some people).
But the compatibilist free will has most of the characteristics that we associate with any kind of free will. For example, there is still value in arguments, as hearing one agent's argument can well be the cause of the other agent's change in behavior. By the same token, it still makes sense to hold agents responsible for their actions and punish them for their decisions, since the cause of their behavior is very much related to their 'person', the sum total of what they have experienced, learned, and have projected about the future through their own rationality.
In fact, it's hard to find anything in the compatibilist free will that contradicts intuitive notions of free will, except for the most religious of intuitions, unless you push it to the brink and try to ask questions where we don't have good intuitions anyway, usually by asking counterfactuals, or moving to the relationship between free will and consciousness.
There is also another kind of approach to this question, one I first heard from Noam Chomsky. His point was: if determinism were to contradict our most immediate experience (as discussed before, it may not), the one of deciding how to interact with the world, wouldn't it make more sense to say that we are missing something from our scientific understanding in this area, rather than insisting that the most common empirical observation we have is completely wrong?
My opinion is that compatibilism (like it or not) does delude people into not understanding that they had no control over how they came to be as a person: the wealth they accumulate, their relationships, the awful things that happened to them. All these things were by fate. And people deeply believe that the "thought" of realizing free will is an illusion equates to an unpleasant existence, or some nonsense about consciousness turning out to be fake (in a sense similar to the colour red changing to orange when yellow is added).
I'm sure some people read about compatibilism and shortly afterwards go back to believing the traditional free-will nonsense. I don't think many people think about not having free will for much time at all. Thinking about determinism for a while makes it hard not to understand predeterminism. That progresses into making it fairly easy to understand that we're just part of a complex system, and that the system works as a collective mesh of subsystems, from "us" down to the smallest thing.
We're similar in concept to robots. The universe wrote the code for our instructions. Similarly, we wrote the robots' "code", and people don't consider the robots to have free will, since it is our "human code" that runs the operating robots. When people think about robots, they reason deterministically without noticing: "no... the actions are really from us, who coded and set up the robots", even though the robots' environment also exerts external forces on their outcomes.
I personally think the world is good but resembles evil, and over time it comes to resemble evil less. I further think the universe is more likely to repeat than to never repeat throughout infinity. A good majority of people don't realize they don't have free will; from what I observe, that keeps evil-resembling experiences around longer on earth, while people are punished severely. They're never told that they lacked the free will required to have a better outcome, nor given the time to observe how the mentality that produced the bad outcome came to be.
Most people don't even experience what the punishments are for unfavourable genetics, financial status, health conditions, and whatever other life variables end up conflicting with the system of society, resulting not in a mediocre-or-better life but an awful one.
So yes, I get freaked out when I observe all the travesties in society, which would likely be met with more empathy if people understood that "success isn't earned" but given to you at birth, like everything else that follows after birth. That means being aware of how the universe, at the moment of its creation, determined the future outcomes (sad or happy) through the summation of sequential forces upon everything, and then thinking of the more privileged versus misfortunate situations in comparison.
The foregoing makes me think the system of society would adapt to be more compassionate than under the opposite, incorrect belief in traditional free will. It's like when people thought it was better to believe the world was flat: delusion isn't better than realism. Eventually this leads me to think people could become more likely to agree on universal healthcare, and even futuristic things like universal income and homes.
So I think belief in free will needs to be killed sooner rather than later, even though whatever happens is what fate already decided upon creation. So I'm somewhat hopeful people examine this topic.
I think your perspective on free will is somewhat closer to my own than I originally thought. I completely agree that outcomes in life are determined much, much more by the world than by any kind of personal responsibility. I think that there are plenty of people who believe even in magical free will (say, Christian concepts of it) who actually understand this same thing, though you are right that there are many who don't.
However, I don't believe in full predeterminism, and you don't either. If you did, you would not think about improving the world, or about convincing people of things: if you believe that the next speech by the president was determined at the time of the Big Bang, then thinking about change is meaningless, and none of us can help feeling however we do.
However, if you believe in a world where the future has not happened yet and it evolves more like a complex computation, with room for changing program code by the program itself, perhaps even with some randomness thrown in, you get a fully naturalistic deterministic world where nevertheless you can try to influence things in some direction or another.
To articulate my own belief about this more clearly, I think the example of robots you gave is very good. I believe that even a robot with a simplistic machine learning algorithm can meaningfully be said to have a kind of free will, in that it can do a better or worse job at what it was designed to do based on the examples it is given and on accidents of its training process. Two such robots may well have different beliefs about the world (in a very basic sense) and they could even influence one another based on their experience (training set) and conclusions (the parameters of their algorithm). This is how I believe humans and animals work as well: we have a pre-determined algorithm (vastly more complex), with different starting parameters between different people; we have a training set consisting of all of our lived experiences; and our algorithm can modify itself or its parameters during training, in pre-specified ways. This does end up meaning that some people end up with better models/algorithms than others, based on better starting conditions or on more luck with experiences. And the decision points in this algorithm are what I think represent our experience of free will.
> However, I don't believe in full predeterminism, and you don't either.
I actually do believe in complete predeterminism.
Example: everything you and I wrote was fated to happen, and so were our thoughts on the subject. Multiple people I've conversed with (even hard determinists) will express opinions like yours: "well then your thoughts on improvement don't matter, because if the universe is functioning under predeterminism, it was fated to improve if it improves", and then try to use that assertion as rhetoric against my personal thoughts or the discussion.
I personally enjoy reading what you wrote to me, just as I enjoy writing about the subject myself; otherwise I'd never learn something new on the topic. I don't mind knowing it's all fated, and the same goes for my thought that the world slowly improves without me having a real will separate from the system.
My thoughts continue to be that I'm a person who enjoys learning about this topic, and that requires conversing about it. I think the understanding that free will is an illusion will one day improve society exponentially faster than a universe in which people didn't come to that realization sooner. The majority of people just need to function with an understanding of the illusion, to the point of understanding complete predetermination.
> However, if you believe in a world where the future has not happened yet and it evolves more like a complex computation, with room for changing program code by the program itself, perhaps even with some randomness thrown in, you get a fully naturalistic deterministic world where nevertheless you can try to influence things in some direction or another.
My idea is that the universe is more likely to repeat than not. I do have wishful thinking, and it makes me agnostic: I like to consider whether there is a higher power that people typically name God. Well, God cannot do the impossible, like making traditional free will real. So I like to think that the universe repeats, with adjustments made after it runs its course. I take a position close to Einstein's, such that once a universe iteration starts, God doesn't interfere, and I acknowledge that belief is impossible to prove. I only assume my thoughts on that are more probable because of the horrible things that happen to people, under the assumption that a higher power wouldn't want suffering in the complex system it created. So again, I think that after the universe runs its course for humanity, it will repeat, and there will be adjustments so things improve on the stories the previous humans experienced.
The foregoing should give you different thoughts on what I previously wrote. The last part of what you're expressing is how our brains function, and we could describe nature similarly to what we define as evolution. But I wouldn't say that's free will. My idea is that free will isn't a possibility even if I die, the universe eventually repeats, and I live again in the recursion but with new improvements making everyone's story happier, created by a higher power that understood my desires in the previous life I lived. That's still not free will, because every universe iteration would be fated from the previous summation of forces.
Good point. This seems to be the exception though. If it's fate to be shown that fate exists, some might find that disappointing. Others might find it a relief. Your happiness or sadness at this revelation would also be predetermined. An extreme form of bondage.
If superdeterminism is true, it appears to imply that the universe is discrete at the smallest scales. Were the universe continuous, 'definite' position would be impossible, meaning that certainty itself would be impossible. Everything would be 'fuzzy' meaning superdeterminism would be impossible.
This is easily countered by imagining that the computational nature of physics is itself stronger than a Turing machine. There would be no issue with certainty in a continuum if you were actively computing with the continuum itself as data.
But that assertion is based on a set of assumptions that come back to information theory and hence computability.
You are right that in modern information theory exact precision on a continuum is physically impossible (for others: as we continue to subdivide, the precision requires more bits of information, which has known physical limits).
But what I think the other poster was getting at is that if the universe runs on a machine that is not bound by those rules, say rules where arbitrary precision on a continuum can be stored as a value (which again violates physicality as we know it, but such a machine is "outside" the universe, so physicality is moot already), then that is possible.
Which is to say, the universe would be a machine that can compute things a Turing machine can't (a Turing machine can compute everything we know to be computable; ergo, if the universe can compute things a Turing machine can't, then the assertion being made, albeit somewhat clumsily, is that the universe doesn't follow the rules we know).
I understand that if we posit a super-Turing machine in which arbitrary positions on a continuum can be maximally expressed as finite values what I am saying does not logically follow.
However, I would argue that such a super-Turing machine is logically impossible. In principle continuous values cannot be physically manifested with certainty or arbitrary precision regardless of what world we are in.
Positing such a super-Turing machine is like saying "I have a square circle in my pocket".
> However, I would argue that such a super-Turing machine is logically impossible.
I mean we are discussing it so it's certainly logically possible.
> In principle continuous values cannot be physically manifested with certainty or arbitrary precision regardless of what world we are in.
Why would they need to be physically manifestable? Again how can you make assertions about what parts of physicality are maintained by the machine that is computing physicality? How can you make assertions about the world containing our own?
> Positing such a super-Turing machine is like saying "I have a square circle in my pocket".
Except it isn't. It's more like "I launched my square circle beyond the observable universe". If it was in my pocket I could take it out and show it to you.
Positing a "super-Turing" machine is pointless because it isn't testable. But it is possible. I feel like that distinction is important which is why I commented. Which is much how I feel about super-determinism in general, sure it's possible, but how do we test it? It's pointless because whether not it exists doesn't change anything. The issue of the discrete values is an interesting facet of that, one that might lead to something testable, like those "is the universe a hologram" experiments. Establishing what would be required of such a system is useful, but it doesn't dismiss it out of hand.
I guess my issue is that your arguments don't fully embrace the theory, so they are a bit like trying to disprove the existence of a different god using the holy book of your own god. There are valid reasons to disagree with superdeterminism, but arguing from the lens of physicality misses the issue at hand.
>(for others: as we continue to subdivide, the precision requires more bits of information, which has known physical limits)
Ugh. Is information theory mathematics or physics? Nature is analog and doesn't work in terms of bits, it's more similar to Euclidean geometry than Cartesian.
But ratios only express rational numbers, if we had to express location in a 3D continuum it's almost guaranteed that it would be represented by an irrational number.
Fair point and I don't necessarily disagree. I don't like the consequences of superdeterminism, just playing devil's advocate. If we exchange simple ratios for fractals we could have compute based infinite precision. Precision would just be a matter of scale of the measurement.
So if there is superdeterminism, meaning positions are absolutely determined, either there is minimum scale in the universe (discrete universe) or somehow the universe computes with infinities.
Both are absurd concepts! When probing the ultimate depths of reality like this there are no good answers.
A complex solution for a simple problem. I'll sound naive, but I think that quantum entanglement is a very simple thing: it's a link, like an edge in a mathematical graph, between two particles. The length of the link is 1. Not nanometers or something else, just 1. The particles rapidly change their state, but they are synchronized via this link, and at any moment they are in opposite states. Measuring one particle is done by bombarding it with another particle, which stabilizes its state, and also the other particle, via the link. In other words, our world is just a mathematical graph that appears to have continuous properties at scale.
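A toy sketch of the graph picture above, just to make the analogy concrete. This is an illustration of the idea, not a physical model; in particular, a fixed opposite-states link like this cannot reproduce the Bell-test statistics that make entanglement interesting:

```python
import random

class LinkedPair:
    """Toy version of the 'edge in a graph' picture: two nodes joined by
    a single link of length 1; their states are always opposite, and
    measuring either node stabilizes both."""
    def __init__(self, seed=None):
        self._rng = random.Random(seed)
        self._frozen = None            # unset until the first measurement

    def measure(self, node):           # node is 0 or 1
        if self._frozen is None:       # first contact "stabilizes" the pair
            self._frozen = self._rng.choice([0, 1])
        s = self._frozen
        return s if node == 0 else 1 - s

pair = LinkedPair(seed=7)
a = pair.measure(0)
b = pair.measure(1)
print(a + b)  # always 1: the linked states are opposite
```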
My own theory is that there is no entanglement and no link between the particles, it is just that the particle states are affected the moment they are created.
I also think that light is not a wave and that light particles are not fired in a straight line in the double slit experiment; they are fired at an angle, and then they bounce off the slits and create the interference pattern; and when we put a detector in front of the slits, it only serves as creating new light particles that do not have this angle in them and they go straight ahead, making the diffraction pattern disappear.
And it's not that quantum math is wrong; it's right, but my intuition says that any system can be described by statistics.
As for quantum mechanics applications: all the applications mentioned (from clocks to computers to anything this article and other articles mention) are not based on any QM-specific property. None of those applications actually depend on entanglement or on collapsing the wave function, and entanglement cannot be used for any application, despite what they write in articles, because it would violate the principles of relativity. And none of those applications would cease to exist if light were not an actual wave and photons simply moved in wave patterns.
All these people working on the foundations of quantum mechanics must somehow have missed the fact that quantum mechanics has been superseded by quantum field theory. Quantum field theory resolves basically all of the typical interpretation-of-quantum-mechanics questions in a rather sophisticated way. Unfortunately, and I guess this is why these "interpretations of QM" are still popular, most things become harder to calculate with QFT than with plain old quantum mechanics. Still, I think that the perspective of taking quantum fields and the path integral as fundamental clarifies a bunch of things, even about measurements. In the path integral formulation you can view a measurement as an additional term in the Lagrangian (for example a non-zero external magnetic field for Zeeman splitting). You then have to expand around the classical solutions, summed over all initial conditions; so if you have, for example, a spin-1/2 dependence, as is the case for the Zeeman effect, you would have to expand around these two different classical solutions. There is nothing mysterious about this, and in the cases where you can carry the calculation out you get the correct predictions. Fundamentally there is no such thing as measurement, just local interactions of quantum fields, and any measurement can be modelled either as such an interaction or as an additional term in the Lagrangian.
The truly interesting discoveries have been made within the framework of quantum field theory: AdS/CFT, going from string theory to low-energy effective theories, or the amplituhedron. All of those have direct and immediate connections to state-of-the-art quantum field theories.
What is done in (https://arxiv.org/pdf/1912.06462.pdf), especially the last part, where they suddenly invoke p-adic numbers in support of a very vague argument, and a cartoon of Penrose's impossible triangle in a similarly vague manner, won't be taken seriously by most physicists, but is apparently sufficient to convince non-technical people to continue funding them. Nautil.us is probably just grateful for the free content.
Do you consider Gerard 't Hooft also one of those non-technical people? Mocking the use of certain diagrams does not give you any credit. I would refrain from such petty arguments if you want to be taken seriously.
About your argument: as far as I know, the measurement problem is not solved in QFT. I have also never heard of adding a term to the Lagrangian to represent a measurement. The addition of the external field to the Hamiltonian of the atom when calculating the Zeeman effect is unrelated to measurement, afaik. If you have any references for your claims, please point them out.
From a brief stint on Wikipedia (https://en.wikipedia.org/wiki/Superdeterminism), superdeterminism looks like it satisfies Ockham's Razor better than the standard QM interpretation. All we have to give up is free will, and why should science try to defend that?
The problem isn't so much the lack of free will (plain old determinism accomplishes that), but the extraordinary lengths the universe has to go to in order to make the super part happen.
In a Bell test experiment, it's not just that the outcomes were predetermined, but that the brains of two experimenters, and their experimental setups, were configured in a very precise way to make them choose measurements that would yield compatible results. That is, it's not so much that you didn't really get to make a choice, but that it took such a complicated configuration in order to bring about a fairly simple result.
And since you can't know the entire configuration of a brain, you've really just replaced the randomness with a mystery box. Instead of "John chose freely to set the experiment to up/down", all you get is "John was fated from the beginning of time to set the experiment to up/down -- a thing I learned by watching John set the experiment to up/down". Instead of randomness or free will, all you get is a sage nod after the fact, "Yes, so was it foreordained in the before-times."
It's not impossible. I'm not even really sure it's implausible. It's just that it doesn't seem to inform anything.
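To make the "complicated configuration" point concrete, here is a small simulation, a sketch with an assumed local hidden-variable model: each pair carries a shared random angle, and each detector outputs the sign of a cosine. No such local model pushes the CHSH quantity S above 2, whereas quantum mechanics predicts 2√2 ≈ 2.83 for the same settings. Superdeterminism escapes only by correlating the hidden variable with the experimenters' choice of settings, which this simulation, like a fair experiment, does not do:

```python
import math
import random

rng = random.Random(0)

def outcome(setting, lam):
    # Detector result (+1/-1) from setting angle and shared hidden angle
    return 1 if math.cos(setting - lam) >= 0 else -1

def correlation(a, b, n=100_000):
    # Average product of the two detectors' outcomes over many pairs
    total = 0
    for _ in range(n):
        lam = rng.uniform(0, 2 * math.pi)  # shared hidden variable
        total += outcome(a, lam) * outcome(b, lam)
    return total / n

a0, a1 = 0.0, math.pi / 2              # Alice's two settings
b0, b1 = math.pi / 4, 3 * math.pi / 4  # Bob's two settings
S = (correlation(a0, b0) - correlation(a0, b1)
     + correlation(a1, b0) + correlation(a1, b1))
print(S)  # hovers at the classical bound of 2, never near 2*sqrt(2)
```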
Layman here, but one thing I never understood: if we are to believe everything started in a Big Bang, then why shouldn't we expect everything in the universe to be quantum-mechanically entangled? I feel like I'd be surprised if that weren't the case.
Entanglement is not a stable mode for particles, afaik; it takes effort to keep it from breaking apart. Note that it's not limited to two particles, so you have a point, but the effects would be vastly more limited in time, at least in the first degree of causality.
If, however, you mean that an entire observable universe¹, if cosmic super-inflation is true, must be causally related and thus "pre"-determined, which might be qualified as "super"-determined in a way, then I agree (layman too).
--
[1]: Assuming we keep -c² as the time term of the spacetime metric, i.e. no causality beyond the light boundary (a function of time, in space).
Isn't it more accurate to say that isolated entanglement is not stable?
Basically any interaction with some other particle causes that particle to become entangled as well, and so you very easily get "avalanches" of entanglement that spread at the speed of light.
As soon as the avalanche reaches "the observer", the observer themselves becomes entangled with the originally entangled particle, which means that from their perspective the particles no longer look entangled. This is similar to conditional probabilities.
The math says so (and the math is right¹), but how we interpret it is another story entirely.
To get back to the example of dice: we might be building a whole storytelling around "probabilistic dice" because we haven't formalized the concept of the hand behind them yet.
I've always pictured entanglement as totally acceptable if you postulate an extra dimension. Then you simply have to picture a link in that extra dimension between the particles, like a 'U' crossing flatland in two 'dots' (particles) would manifest as 'entanglement' for the flatlanders.
But I've really grown skeptical of all such interpretations now (mine very much included), I'd tell you honestly that I probably favor the one that seems most beautiful to me within the space of possibilities (which is what I care about and investigate, the pure science). Interpretation is belief, basically, even if your name is Schrödinger.
____
[1]: Even if eventually found to be partial as the article suggests, I think it's fair to assume any "new" quantum theory would be to QED/QCD what General Relativity was to Newton's equations of motion: precisely a generalization from which to derive Newton for small-ish quantities. Ergo, we can assume QM is 'True'.
That kind of entanglement doesn't really explain the Bell test results.
Entanglement is unstable, and easily swamped. It doesn't go away, but it becomes a very small factor in things. If I had two entangled electrons, and added them to two coins, the two coins are now entangled. But if I flip them, I don't expect the results to be significantly correlated. The classical behavior of the coins massively swamps the entanglement.
So even if you and I were performing a Bell test experiment, and know that the atoms of your brain are somehow entangled to the atoms of mine dating back to the origin of the universe, it's still unreasonable to expect that to lead us to set up the equipment in such a way as to preserve the correlations between two entangled particles in our experiment. There's just too much brain matter, and too much has happened to it from the beginning of the universe until now, for that entanglement to make a difference.
Thanks for the reply! So what I don't understand is, in your explanation, you appear to tacitly assume the rest of the brain matter is (in some sense, maybe statistically) "[mostly] independent" from the electrons we're measuring. As if, maybe they were behaving independently before, and only now have become a teeny tiny epsilon bit entangled due to the entangled electrons they're interacting with. But I don't understand what this assumption is based on; it seems to directly contradict the Big Bang idea. Presumably we all came from the same Big Bang, and were all completely entangled with each other at that point, with everything tied directly to everything else. Why should it be that after the particles got away from each other, this would stop being the case? I just don't understand. To me, if literally everything in the universe was entangled at some point, it will be forever—to my understanding, there's no way to "reduce" entanglement unless you have a source of particles with "less" entanglement to interact with, right? Which I understand would not be the case in this universe, because everything would be fully entangled with everything else.
That seems to follow from determinism... if the universe is deterministic then everything must align exactly in such a way that if you simulated the universe it would be identical every time.
It's not just that you get repeated results, but that the repeated results will be correlated with non-local results. In ordinary determinism every interaction can be explained purely by local stuff; the further away something is, the less it matters. In this case a very distant object has an improbable effect, which can be achieved only by constructing a Rube Goldberg path for it to reach a particular state.
In many ways, superdeterminism is a sort of intuition that most ‘physicalistic’ minds would come up with, as a thought experiment. This sort of ultimate inter-connectedness is but a generalization of reductionism, after all.
But speaking of this today, knowing what we know (more like what we don't), and the formidable extent of our reach into and grasp of the cosmos (none, in the big picture)... I don't know. It's like the Ancients talking of "atoms". Sure, they had the right idea... but from there it would take another 2,000 years to actually home in on the concept — and consider that thousands of years of civilization and history already preceded these people. It's neither preposterous to think we're millennia away from some discoveries, nor to firmly believe we'll get there, eventually.
Science problems aren't solved quickly by a biological species... I don't know that we can speak of "failure" when it's just been one century of milder fundamental discoveries but a slew of practical applications (still unfolding) of the current paradigm.
It doesn't just take a problem to crack the next paradigm in science; it also requires the incentive, at a civilizational / societal level, to push through to it. It has rarely if ever taken less than a century, because it takes the biological maturing of adult human beings to actually crack it, and typically 2-4 generations rather than 1-2, somewhere in between Twitter and HN. (I'm being funny, not sarcastic here — we're all human, so chill the doom-and-gloom talk, that's my point.)
In a very real sense, I feel like superdeterminism is pure metaphysics as we speak; could maybe become science by the year 4,000, give or take 1,900... Oh History, you chaotic brat...
I love the idea though.
(-skip!- Comment over. Below is me rambling about personal ideas with little to no sci value, most likely; have fun at your own risk).
It's actually somehow embedded in how I explain dark matter halos and galaxy rotations: all being but one heterogeneous object whose 'kernel', a supermassive black hole, has more than enough oomph to justify just about any phenomenon (it literally breaks spacetime and gravity!). I mean, isn't it obvious that "a weird¹ gravitational halo" and "just about the biggest gravitational object known to the cosmos" happen to come in perfect pairs cutely called "galaxies"? (Like we'd call boats mere "sails" or even "triangles" on the ocean because we can't see better; or is it a figure of speech?) For a cool image, a galaxy is like an egg, with a disc-like yolk and a black hole in the middle. Egg shell = DM.
It's cool to think about such topics. It must be profoundly inspiring to actually work on such ideas. I just think it requires this humble reverence for time. I don't know how real it is, but it sure means we can't bring about any future event in the series before its time, whether A or B. New physics may have to wait...
Now meanwhile, we was promised flying cars 30 years ago. Can we talk about how superdeterminism solves our problem? ¯\_(ツ)_/¯
--
¹ Think about it: both DM and BH "break" light as we know it with every other object (photons either pass through or can't escape); the insane gravitational effects (can't imagine DM doesn't "feel" the black hole and reciprocally, these are not limited in distance...); the singularity of having such pairs of DM + BH in pretty much every galaxy, or should we say that every galaxy is "inside" such a combo; and yeah it helps that both are unsolved problems thus perhaps requiring "new physics" to solve. I feel like both problems are but one and the same, and solved by an extension of GR/QM to some degree (not necessarily unification, GUT or ToE, that might come much later or never). Anyhow just random thoughts gathered along the years. Now I wish this were my actual line of work... :)
Free will is a terrible name for it, and arguably inherently contradictory, like "involuntary fasting": it's a synonym for an uncaused cause.
What's lost in the process of burning that strawman, is that we can't ever have perfect knowledge of our environment or our nervous systems; and in a seeming paradox of determinism, a being who believes they have agency will behave differently than one who does not. Even if one accepts the model that we merely rationalize our subconscious choices (an imperfect model with some non-zero truth), the data from the outcome of that choice still feed into one's neural network and influence future choices, making the "free will software stack" a potentially valuable psycho-technology.
As a practical matter, I'm intrigued by the model that what we experience as consciousness, and the feeling of agency, really only has one knob to turn: what tiny fraction of our massive neural information network we orient the spotlight of our attention towards. That singular "decision" point, navigating what Buddhist cognitive scientist John Vervaeke calls the "salience landscape", has incredible potency on its own, iterated moment after moment for the entirety of a human lifespan.
"the data from the outcome of that choice still feed into one's neural network and influence future choices"
It seems to me that any time you have a process that feeds the output into the input, then you can easily have a divergence of trajectory whose precision exceeds anything that could be explicitly stored in a finite universe.
So it seems to me that free will must exist in the sense that a simple system, let alone a brain, can produce information from somewhere that can't come from the limited universe we seem to inhabit. Like, take the Mandelbrot set as a simple proof of concept. It may not have all the things in it that are in the universe, but it seems to have more resolution, more components, than can exist in reality. So if you used it to feed a process, it can add something to the universe as it affects reality.
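As a toy numeric illustration of that output-into-input amplification (a sketch of my own, using the textbook chaotic logistic map rather than the Mandelbrot iteration): a perturbation far below any plausible stored precision still ends up dominating the trajectory.

```python
def logistic(x, steps):
    """Iterate the chaotic logistic map x -> 4x(1-x), feeding each
    output back in as the next input, and record the trajectory."""
    traj = []
    for _ in range(steps):
        x = 4.0 * x * (1.0 - x)
        traj.append(x)
    return traj

a = logistic(0.3, 60)
b = logistic(0.3 + 1e-12, 60)  # perturbed by one part in a trillion

# The gap roughly doubles each step, so the trajectories decorrelate
# completely well before step 60.
max_gap = max(abs(p - q) for p, q in zip(a, b))
print(max_gap)
```

The point being: the divergence is real, but whether it creates information "from outside the universe" or merely unfolds detail already implicit in the initial condition is exactly the question at issue.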
Superdeterminism requires a ludicrous amount of complexity. It's not a simpler theory, it's a vastly, vastly more complex one.
It implies that the universe is fine-tuned to such a degree that everyone who has ever run an experiment that might disprove QM arrived at their n-sigma result by random chance.
The only way it could make even a lick of sense is if we're living in a simulation, and there's a little green man whose job it is to ensure we never disprove quantum mechanics.
The thing that makes superdeterminism hard to believe is the same thing that makes time travel plots where destiny is fixed hard to believe. The level of coincidence and orchestration required to make things play out in just the right way, despite determined humans trying to make them come out another way, becomes ridiculous.
The angle settings in a Bell test can come from basically anywhere. For example, you can point telescopes at opposite sides of the universe, feed the frames into a cryptographic pseudo random number generator, and use the output to decide the angles. For superdeterminism to be true, that would mean the initial state of the universe had to be such that millions of years later billions of stars on opposite sides of the universe would be in just the right configuration to make the SHA1 hash of a picture of them come out in a way that correlated in a simple way with the polarization angle of the photons you were generating locally. And then they'd also correlate for the next one. And the next one. And the next trillion after that. And also it has to be just right for all the other ways you can set up the angle selection.
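The hash-to-angle step of that scheme can be sketched in a few lines; the frame bytes and the mapping into [0, 360) here are stand-ins I made up, not part of any real Bell-test rig:

```python
import hashlib

def angle_from_frame(frame_bytes):
    """Derive a polarizer angle deterministically from arbitrary input
    bytes (e.g. a telescope frame) via a cryptographic hash."""
    digest = hashlib.sha1(frame_bytes).digest()
    # Interpret the first 8 bytes of the digest as an integer,
    # then scale it into a degree value in [0, 360).
    n = int.from_bytes(digest[:8], "big")
    return 360.0 * n / 2**64

frame = b"placeholder pixel data from a distant quasar"
print(angle_from_frame(frame))  # some angle in [0, 360)
```

For superdeterminism to hold, the initial conditions would have to anticipate not just the starlight but this entire hash pipeline, and every alternative pipeline anyone might swap in.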
I won't say it's literally impossible to satisfy that many constraints. There's a lot of degrees of freedom to work with, and only finitely many constraints before heat death. But it seems really really contrived to me. Like explaining why someone has rolled five hundred natural 20s with "Oh that's just how things were always going to play out" instead of "that seems suspicious. Let me see that die."
Superdeterminism means that a hugely macro level decision like which measurement to take is tightly coupled with teeny tiny micro level outcomes. It would mean that the physical predictions are really really complicated. I'd say Occam's Razor points the opposite direction.
Satisfying Occam's Razor is not a necessity, it's more a rule of thumb.
Based on Occam's Razor alone, modern DNN based models are complete garbage because of the amount of complexity involved in their models, yet they still hold useful predictive power.
Sometimes we have to throw out these general rules of thumb and test the waters outside when we hit walls we can't seem to get around.
Interest and pursuit in the philosophy of science is something too largely brushed aside that, IMHO, needs an adrenaline shot these days. Unfortunately, few want to pursue that work because it doesn't pay the bills.
Totally agree. I was just countering the parent comment that said "occam's razor points this way" with "nah it points the other way." I don't intend to say we necessarily should follow it in either direction.
Because free will is supported by some pretty overwhelming evidence. Every human who has ever lived has had the experience of making a free choice. That experience is every bit as real as the experience of, say, seeing the moon. If you're going to seriously entertain the possibility that there is no free will then you need to be ready to entertain with equal seriousness that the moon does not exist because both are supported by similar bodies of evidence.
What really seems a mirage to me is mistaking free will for unpredictability. That's like saying that your mind needs to be outside the physical world ("a soul") to make "real decisions", which is absurd.
Yes, it is outrageous to claim free will is an illusion. As you point out, it contradicts a huge volume of evidence (our perception). However, a significant proportion of physicists believe free will is an illusion. What possibly could have persuaded such a large number of Earth's greatest skeptics?
The mind consists of different parts. Imagine the act of choosing to eat a strawberry ice cream or a chocolate ice cream. You happen to like chocolate and dislike strawberries. If you could analyze the path electrons follow inside your brain, you could trace, even predict that you will choose chocolate ice cream.
Does it mean that you aren't free? I don't think so.
You might say that you aren't free to like chocolate and dislike strawberries. I'm not convinced either. There were foods I disliked (actually most of them) that I chose to get used to, and finally came to appreciate. So I can make myself freer.
For minor daily things I may indulge in neurotic behaviours. But I can make most decisions rationally, and usually do so when they're important enough for my life. As I grow older I have more control, so I'm actually more predictable. Does it mean that I'm less free now?
Sure, you are free, but that you is precisely those electrons in "your" brain, nothing more and nothing less. So all there really is is electrons flowing in predictable ways, predictably choosing chocolate or strawberry.
Have you ever written any papers or blog posts regarding your thoughts on free will? I'd love to dig deeper into your perspective. I had assumed you would consider free will to be an illusion.
I do consider free will to be an illusion! But making that argument is far from the slam-dunk that the OP implied, even in the face of quantum mechanics. I don't believe in superdeterminism either. The problem with SD is that it requires all of the information about all future events to reside somewhere now, and there is no known place in the universe where it could be hiding.
There's a lot more where that came from if you're interested.
P.S. I love your handle! I see you created this account for the sole purpose of responding to this comment. How did you even find this thread? It has been downvoted into oblivion.
The moon does not exist in the exact same sense that free will does not exist. The moon is a concept that only exists in living minds. From an ultimate perspective there is no moon and there is no free will. This is a straightforward consequence of physicalism.
I think free will is fundamentally a bad term/model, independent of the truth value of what everyone means by that label. That said, I think you might be somewhat conflating free will and consciousness; it's perfectly cogent, for instance, that a provably deterministic robot might have subjective experience; and that after executing each pre-determined binary instruction, would be programmed to have a feeling of "I chose that instruction".
I do think those who have a bias for materialism are overly eager to discard fuzzier subjectivities like free will, consciousness, even religious experiences and mystical states. There is obviously a sense in which Harry Potter is not real; but there is also a sense in which Harry Potter is more real than an arbitrary rock or tree, because as an ephemeral memeplex that being has altered planetary culture and created billions of dollars of economic activity. Just because a phenomenon lacks affordances for empirical measurement does not mean it is necessarily absent from reality altogether.
I've mentioned this in a couple different HN posts regarding QM recently and I'd like to mention this again. I'd greatly appreciate it if we could do some back and forth. I think the "current crisis in physics" has much to do with the foundations of QM. If what I present below is just false, then fine, I'd appreciate an explanation of why instead of a downvote.
In short, I believe what Planck found was not an action constant, but rather it was an energy constant. Planck's constant has the wrong units. The constant should be in Joules/oscillation as opposed to Joule*sec.
Planck inadvertently "hard coded" a one second measurement into his constant.
His equation should be E=htf, where h is in Joules/oscillation, t in seconds, and f in oscillations/second. I think there is a reason why the units for frequency aren't explicitly listed as oscillations/sec: the units of E=hf don't balance if f has the units oscillations/second.
Can you find an error in her paper? Looking forward to hearing your thoughts! Thanks!
Upshot: Every oscillation of light has the same energy, and what we regard as a photon today is actually a one second measurement of oscillations of light at some frequency f.
Sure, just one you say? In Ch. 2 §1 Mortenson, MD tries to derive the "mean energy per oscillation".
She refers to the dimensionless scalar "N" as the number of "waves" the photon is comprised of.
That doesn't make any sense whatsoever, since a photon can only have a single frequency, and is hence described by a single wave. She also introduces a "unit" that doesn't exist, namely the "osc" (= oscillation), which is DIMENSIONLESS, i.e. it doesn't have a physical representation and therefore cannot be a unit...
She then continues to fail to apply the most basic dimensional analysis and I basically stopped right there.
It's a bunch of hogwash and esoteric pseudo-science, sorry.
The current interpretation of E=hf is what suggests that a photon is the elementary particle of light, and that a photon can only have a single frequency, so I don't think that is a fair critique. She is trying to point out that E=hf and the current interpretation is wrong, and is not assuming it's true in her analysis.
> DIMENSIONLESS
Although this is an aside to your point, can I hear your thoughts on using the dimensionless fine structure constant in equations?
Following your reasoning are we not allowed to use radians (an SI unit) in equations either?
> She is trying to point out that E=hf and the current interpretation is wrong and is not assuming it's true in her analysis.
If you want to prove an equation wrong, you cannot use it as a starting point. Let me show you the fundamental error she makes in detail:
E=hf
Dimensional analysis: J = (J·s) · (s^-1) = J·s / s = J · 1 = J
So the original checks out ok.
The paper tries to argue that Js is the wrong unit for h and replaces the "1" from above with bogus "oscillations", which are never defined. The proper inverse of frequency is not "oscillations", however, it's period (singular!), which is measured in seconds.
Dividing by "oscillations" gives you the same unit you started with and changes nothing. This is independent of your interpretation of what a photon is. Since she also never specifies the relationship between "oscillations" and wavelength, I wonder how E=hc/λ follows, let alone the de Broglie wavelength...
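To make the standard dimensional analysis concrete, here's a quick numeric check using the exact SI value of h (the frequency of green light is my illustrative pick):

```python
# Standard reading of E = h f: (J·s) * (s^-1) = J, no leftover
# "oscillation" unit anywhere.
h = 6.62607015e-34   # Planck constant, J·s (exact by SI definition)
f = 5.45e14          # frequency of green light, s^-1 (Hz)
E = h * f            # energy of one photon, in J
print(E)             # ~3.61e-19 J, the textbook per-photon energy
```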
> can I hear your thoughts on using the dimensionless fine structure constant in equations?
Using the fine structure constant in equations is equivalent to using pi in equations. Proportionality factors exist in nature.
Planck's constant does have units of energy per oscillation.
Or to be exact, it has units of energy per frequency of oscillation,
e.g. Joules per Hertz (J / s^-1) = J s.