The Consensus Construct: unifying quantum, social and scientific realities
Author: Pavel Chvykov
Pavel got his PhD in theoretical physics from MIT, then ventured into Complexity Science, working on a broad range of interdisciplinary topics. Over the last 8 years, mindfulness practices have become his focus, and he is now building a research community to leverage Complexity Science tools in Contemplative Studies, and vice versa. If you're interested in connecting around any of this, please reach out to him and/or follow his blog at https://www.pchvykov.com/blog
TL;DR: All three of these are centered around the idea of consensus reality – that reproducibility, or redundancy of consistent records, is what makes something "objectively true." Slight deviations from such consistency are what lead to non-classical effects in the case of Quantum Darwinism, create opinion dynamics, conflicts, and the evolution of thought in the case of society, and are generally avoided in the case of the scientific method. In this essay I unpack these parallels, suggest the possibility of a unifying mathematical framework, and consider the consequences of admitting some non-reproducibility as a generalization of the scientific method.
For a few years now I’ve been thinking about parallels between three seemingly very different topics: Quantum Darwinism (a recently popular interpretation of QM = Quantum Mechanics), social constructs (like money and culture – interpersonal realities we create and live by), and the scientific method. I have not yet found a way to make these connections rigorous enough to develop a proper theory, and so in this essay I want to work towards that by clarifying these ideas and their connections, and pointing out possible implications.
Quantum Darwinism
We begin with a rough overview of Quantum Darwinism (QD) [Zurek, 2009, wiki]. The core issue in Quantum Foundations research, as I see it, is to understand how the simple dynamics described by the Schrödinger equation of QM can ultimately give rise to the complexity of the observed world around us (see Carroll, 2018, also Bohm's “implicate order”). In some sense, QM suggests that we might describe the state of the entire universe with a single vector in a very high-dimensional space (the “universe wavefunction”), while all the dynamics in the universe are described by this vector’s steady rotation. The messy, complex universe we experience must then somehow arise from the fact that we ourselves (the “observers”) are part of this universe wavefunction, and so we don’t get to see its simplicity “from the outside,” but get some internal partial view of things. But how exactly this happens is still pretty confusing and a subject of debate in Quantum Foundations research.
If we consider the observer to be a non-special part of this universe wavefunction, then we are led to the many-worlds interpretation, where every possible scenario plays out in its own "branch" of the universe (at least mathematically). But how can we get branches, if all we have is a vector spinning around in some high-dimensional space? While I have not yet found an explanation that I find conclusively satisfying, quantum decoherence (and einselection, see Zurek, 2001, wiki) proposes an answer, which QD (Quantum Darwinism) further develops.
If we split the universe into our “quantum system” and “environment” (choosing this split is an important issue we will skip here, see Carroll, 2018 again), then the interaction between the two transfers information about the initial system state to the environment. So if Schrödinger’s cat starts in a superposition of dead and alive, then photons hitting it once the box opens will carry information about its state through the universe – whether a human is there to observe them or not. Moreover, QD argues, new photons will keep hitting the cat, thus taking more and more copies of the information about its state across the universe. This way we can say that records of the cat’s state proliferate in the environment with high redundancy, thus allowing many different observers to independently extract (measure) these records and come to a consensus about the cat’s state. “Objective existence – hallmark of classicality – emerges from the quantum substrate as a consequence of redundancy” [Zurek, 2009].
The core point of QD is that only those system states that can produce many redundant records of themselves in the environment will be the ones we observe as objective reality. So for the cat, a superposition of dead and alive will never be "objective" since it is not stable under interactions with photons – and so cannot be copied many times. So, for example, if one day you observe aliens, but no one else does, the aliens will not be considered “objectively real” for lack of redundant records. This way, we get a sort of “Darwinian selection” of states, such that only the “fittest” (most stable) system states can produce many records in the environment, and so become an observed reality in some branch of the many-worlds wavefunction.
Social constructs
This way, QD suggests that the key criterion for having an “objective reality” is consensus among the various system records in the environment – and ultimately, among the observers measuring those records. This evokes the scientific method, where reproducibility of an experimental result is the core criterion for “objective truth” – we will explore this further below. But where we most commonly encounter such “consensus reality” is in our social constructs – such as money, nations, moral doctrines, etc. This parallel is further inspired by some exciting work that has recently been done in cognitive science, identifying the ways that quantum-like math may effectively model some behavioral experiment outcomes (see Quantum Cognition; crucially, note that this makes no claim about actual quantum effects in the brain!).
I find money to be the most fascinating and pertinent social construct in our daily lives. The worthless pieces of paper we pass around acquire real value when we have consensus in some group to treat them as valuable. Money gets this tremendous power, which can make the difference between life and death, simply by us agreeing to play the game. This became especially clear with the introduction of bitcoin. Similarly, money can easily go back to being worthless paper – as with hyperinflation in the face of national crises, where trust in the systems backing the money erodes. Yet, in our day-to-day lives, it is one of the most real and robust “objective realities” – in practice, often more “real” to us than birds, stars, global warming, or happiness.
And just as with QD, money only works if all observers agree on its value. Introducing even a few players who view it differently subverts the whole system – as money becomes a not-quite-universal medium of exchange, soon eroding general trust in it. In QD, two branches of many-worlds that have inconsistent records of the system’s state (cat is dead in one, alive in the other) are prohibited by laws of QM from ever interacting (if the two branches have diverged sufficiently). For social constructs, however, there is no law of nature that prevents two inconsistent social realities from colliding – and so we ourselves decide that they “should not” collide. When they do, we go to war to protect the consistency, and therefore the very existence, of our reality. We thus create the relevant laws, and then spend considerable resources to enforce them – using police, military, indoctrination, childhood conditioning, language, etc. Even on a personal level, we may maintain two inconsistent images of ourselves at work with our colleagues, versus at late-night parties – and work to ensure those “branches” of social reality never interact. Interestingly, by doing this we sacrifice having any “objective truth” of who we are, according to the above definition – which can lead to losing a “sense of self.”
Continuing the analogy with QD, only those social constructs that can accurately reproduce across the minds of many observers, creating a consistent understanding across culture, can become our social reality. Note that such reproduction depends not only on the construct’s value (truth value, usefulness, etc.), but also on its appeal, how easy it is to understand, how well it is presented, its propensity for miscommunication, the reputation of the originator, etc. [c.f. memetics, and my other piece on "Values Darwinism"]. For example, we could imagine modeling ideas spreading on a social network, with some probability of mutation at each replication. Only those ideas that can “infect” a large segment of the network, while remaining in a sufficiently consistent form across this segment, would become a new social reality. This way, the ‘success’ of an idea could be undermined by a low propensity for replication just as much as by a high mutation (miscommunication) rate. Thus we would expect the winning social constructs that we ultimately live by to be simple to communicate and understand, while also being maximally engaging or emotionally triggering.
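To make this concrete, here is a minimal toy simulation of the kind of model I have in mind (all parameters and the bit-string encoding of an “idea” are illustrative assumptions, not a developed theory):

```python
import random
from collections import Counter

def simulate(n=200, k=6, p_copy=0.3, p_mut=0.01, steps=50, idea_len=16, seed=0):
    """Toy model: an idea (a bit-string) spreads over a random network,
    with each bit miscommunicated with probability p_mut at every replication."""
    rng = random.Random(seed)
    # each agent gets k random (directed) neighbors
    nbrs = [rng.sample([j for j in range(n) if j != i], k) for i in range(n)]
    beliefs = {0: tuple(rng.randint(0, 1) for _ in range(idea_len))}  # patient zero
    for _ in range(steps):
        for i, idea in list(beliefs.items()):
            j = rng.choice(nbrs[i])
            if j not in beliefs and rng.random() < p_copy:
                beliefs[j] = tuple(b ^ (rng.random() < p_mut) for b in idea)
    variants = Counter(beliefs.values())
    reach = len(beliefs) / n                                    # how far it spread
    consistency = variants.most_common(1)[0][1] / len(beliefs)  # share of top variant
    return reach, consistency

# An idea becomes a "social reality" only if both numbers are high:
print(simulate(p_mut=0.005))  # spreads far and stays consistent
print(simulate(p_mut=0.2))    # spreads far, but fragments into many variants
```

In this toy, an idea’s ‘success’ is a product of two factors – replication (reach) and fidelity (consistency) – matching the intuition above.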
I find it deeply exciting that both our physical reality and our social reality could emerge according to similar principles – the former by Quantum Darwinism, and the latter by “Memetic Darwinism.” It seems to me that the core difference between the two is the propensity for collisions of inconsistent branches. In QD, such collisions follow precise laws, and can only happen when the inconsistencies are limited to microscopic systems, leading to quantum interference in a predictable, well-understood fashion. In contrast, inconsistent social realities can collide unpredictably, and on all scales – from a lie being discovered in private life, to power struggles and censorship among global superpowers with incompatible value or belief systems. To some extent, such collisions gradually lead to finding some common ground, which then solidifies into globally accepted “objective” reality. These realities can become so deeply rooted that we take them for granted, seeing them as fundamental laws of nature, and often cannot even conceive of an alternative (e.g., few groups now entirely reject the use of money, and we often forget that its value is just a construct, and not intrinsic to the paper – imagine burning a $20 bill and notice what you feel).
But it is equally important to acknowledge that, as far as social realities go, collisions of inconsistent branches are entirely common, and underlie many of our worries and efforts in life. We therefore cannot disregard them and confine our social research only to consensus realities. In the QD analogy, that would be equivalent to ignoring all quantum effects and assuming a classical world. It also seems related to the equilibrium assumption in economics and finance – which grossly misrepresents reality, and has recently become a hot topic in these fields. This brings us to our final section.
The scientific method
There seems to be a deep relation between consensus realities and the reproducibility criterion of the scientific method. We may even argue that the “objective truth” arising from the scientific method is just a special case of such a social consensus reality – an observation becomes “objective” when we all agree on it. The discussion in the last section therefore makes us wonder whether the scientific method could be generalized beyond strict consensus.
The idea is easier to introduce on a social network, where agents make observations about each other (e.g., “Alice is kind” or “Bob has an accent” or “Eve has brown hair”). Any of N agents can “measure” (observe) any other agent, with the measurement outcome recorded on the edge A -> B between them – thus giving N^2 observation records. In the case of observing something like hair-color, we might expect that all edges connecting to Eve (x -> E) will agree that her hair is brown. In this special case, we can compress the information on the network by labeling just the node E with “brown” rather than all the N edges leading to it. This way, we can more efficiently say that Eve objectively has brown hair, rather than saying that everyone who looked at her saw her hair as brown. Note that while the former is more efficient, the latter is more accurate. Since our brain naturally looks for efficient representations (or compressions) of reality, we tend to think in terms of objects having properties, rather than only in terms of observations having outcomes (for all agents in the network, we would need only N properties, rather than N^2 observations). As such, it is important for us to know when such compressions are reliable – and the scientific method is just the tool to check this.
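A minimal sketch of this consistency check in code (the function and data layout are my own illustration, not an established formalism):

```python
def compress(records, agents):
    """records: dict mapping (observer, target) -> observed outcome."""
    node_props = {}
    for t in agents:
        seen = {records[(o, t)] for o in agents if (o, t) in records}
        if len(seen) == 1:          # full consensus: the property can live on the node
            node_props[t] = seen.pop()
        # otherwise the data must stay on the edges - no objective node property
    return node_props

agents = ["Alice", "Bob", "Eve"]
records = {(o, "Eve"): "brown" for o in agents if o != "Eve"}  # everyone sees brown hair
records[("Alice", "Bob")] = "kind"
records[("Eve", "Bob")] = "mean"                               # no consensus on Bob
print(compress(records, agents))  # {'Eve': 'brown'} - N edge records become 1 node label
```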
Now, if we instead have agents make N^2 observations about each other’s personality (“Alice is kind”), we do not expect such compression to typically be possible. We thus cannot have “objective” truths about the agents’ personalities themselves, and must keep the data on the edges rather than the nodes (“Bob thinks that Alice is kind,” "Eve thinks Alice is mean"). No compression is possible, the scientific method cannot help us, and we must keep N^2 "subjective" records to accurately describe the system. I wonder if this may be the core of why the use of the scientific method in psychology research has been so hard-going compared to physics.
Finally, consider the intermediate example: “Bob has an accent.” If Bob’s accent is British, then we would expect most Americans to agree with this statement, while most Brits would disagree (cf. the social constructs section above). Thus, while this statement will not get universal consensus, there will be large clusters in the network, in principle allowing for some compression of the N^2 observation records. This creates a curious intermediate between objective properties of nodes and subjective observations on the edges. The scientific method would, in this case, miss the opportunity for compression, merely concluding that nothing objective can be said on the matter. (Note that you could, of course, modify the query to “Bob has a British accent,” which would lead to consensus and objective properties – but such a modification will generally be hard to find, and I think may not exist at all in some cases.)
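Extending the sketch above, such partial compression could look something like this (the names and data are made up for illustration):

```python
from collections import defaultdict

def cluster_compress(records, agents, target):
    """Group observers by the outcome they report about one target."""
    clusters = defaultdict(list)
    for o in agents:
        if (o, target) in records:
            clusters[records[(o, target)]].append(o)
    return dict(clusters)  # outcome -> the cluster of observers reporting it

# Hypothetical data: American listeners hear an accent, British listeners do not
records = {(o, "Bob"): True for o in ["Alice", "Carol", "Dan"]}
records.update({(o, "Bob"): False for o in ["Nigel", "Poppy"]})
print(cluster_compress(records, ["Alice", "Carol", "Dan", "Nigel", "Poppy"], "Bob"))
# {True: ['Alice', 'Carol', 'Dan'], False: ['Nigel', 'Poppy']} - a few cluster
# labels instead of N edge records: an intermediate between node and edge ontologies
```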
Thesis: We thus consider whether a generalization of the scientific method may be possible, which lets go of the goal of finding N objective properties of nodes, and instead looks for any possible compressed representation of the fundamental network of N^2 measurement records.
It seems to me that such a generalization may be more conducive to productive research in psychology and sociology, since fully objective properties are hard to come by in those contexts. Furthermore, if this lack of objectivity can be framed in terms of collisions of inconsistent social realities, then we have both an explanation for why this happens and a tool for studying it – one where subjective realities are better modeled and acknowledged. Moreover, continuing the parallel with QD, this network approach may similarly help clarify some of the issues of quantum foundations – by dropping our attachment to an ontology of objects having objective properties in the first place (see more below).
One simple example application in economics may be to consider a barter market, with N goods, and N^2 pair-wise exchange rates between them. If those N^2 rates meet some very specific (equilibrium) conditions, then we may compress the market representation and describe it with just N “prices” of goods, thus defining some universal medium of exchange or “money.” Real markets, on the other hand, are never at equilibrium, and this compression is never exactly possible (since arbitrage exists). Appreciating the fundamental network structure, we could study the errors coming from such imperfect compression, or perhaps even look for other compression algorithms altogether. For example, perhaps we could define two currencies rather than just one, with the possibility of arbitrage between them. Such alternate compression schemes may lead to better economic models, or governance systems that better reflect the economic reality. While similar math is used in finance, something about this ontological shift from properties to relations seems useful and novel to me.
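As a minimal sketch of this in code (the exchange rates and the fitting procedure below are illustrative assumptions, not a real market model):

```python
import math

# Toy barter market: r[i][j] = units of good j received per unit of good i.
# Equilibrium means r[i][j] = p[i]/p[j] for some prices p - equivalently, every
# trade loop multiplies to 1 (no arbitrage): r[i][j] * r[j][k] * r[k][i] == 1.
def max_triangle_arbitrage(r):
    n = len(r)
    return max(abs(r[i][j] * r[j][k] * r[k][i] - 1.0)
               for i in range(n) for j in range(n) for k in range(n)
               if len({i, j, k}) == 3)

def fit_prices(r, iters=200):
    """Compress N^2 rates to N best-fit prices, with good 0 as the numeraire.
    A simple fixed-point sketch, not a production solver."""
    n = len(r)
    logp = [0.0] * n
    for _ in range(iters):
        for i in range(1, n):  # logp[0] = 0 stays fixed as the unit of account
            logp[i] = sum(math.log(r[i][j]) + logp[j]
                          for j in range(n) if j != i) / (n - 1)
    return [math.exp(lp) for lp in logp]

r = [[1.0, 2.0, 0.55],   # made-up, slightly off-equilibrium rates
     [0.5, 1.0, 0.26],
     [1.9, 3.9, 1.0]]
print(max_triangle_arbitrage(r))  # > 0: exact compression to N prices is impossible
print(fit_prices(r))              # best-fit "prices"; the leftover error is the arbitrage
```

The residual left over after this compression is exactly the arbitrage opportunity – the information that a pure “prices” ontology throws away.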
To further relate to QM, we must first generalize from a network of agents that can observe each other, to a network of N particles, with observations being replaced by interactions. This way our ontology is shifted from properties of particles, to outcomes of interactions – including, but not limited to, the interaction of a human observer with a quantum system [this attitude is formalized in Relational QM – Rovelli, 1994, Rovelli, 2021, wiki]. While Relational QM (RQM) is considered a different interpretation of QM from Quantum Darwinism, I find the two to be taking different perspectives on the same idea – and really helping to clarify one another. While RQM focuses on shifting fundamental ontology from nodes (particle properties) onto edges (interaction records), QD then specifies the condition these edges (records) must satisfy to give rise to objective reality (i.e., the consensus needed to recover particle properties). This condition, which in QD is framed as proliferation of consistent measurement records, is like the network equilibrium property described above – i.e., going around any loop in our network of interactions produces consistent records of the system state, no "arbitrage" is possible, and the network can be compressed down to N "objective" particle properties.
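To state this equivalence minimally (my notation, not drawn from the QD or RQM literature; the multiplicative records r_{ij} stand in for whatever consistency relation physical records satisfy): for positive records on a connected network,

$$ r_{ij} = p_i / p_j \;\text{ for some node values } p_i \quad\Longleftrightarrow\quad \prod_{(i \to j) \in \ell} r_{ij} = 1 \;\text{ for every closed loop } \ell. $$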
Conclusions
Ultimately, the question is whether such a generalization of the scientific method is actually useful. It seems nice that it could clarify some confusions around ontological realism in QM – but we wouldn't expect it to make any falsifiable predictions that go beyond standard QM. Game theory might give a better way to quantify how useful this generalization is – in analogy to how the Dutch book argument is used to justify Bayesian probabilities. That is, we could construct some game where an agent that leverages relational ontology (measurement records on edges), or partial compressions, would beat one that only considers objective node properties. In some sense, the barter market example above suggests that this should be possible, and should be related to arbitrage methods.
To conclude, it seems to me that bringing these three topics together (Quantum Darwinism, social constructs, scientific method) in one mathematical framework would lead to a beautiful theory that could clarify some issues around Quantum Foundations, give a qualitatively new way to understand society and the social realities we live by, and possibly even allow us to generalize the conventional way we think of "objective truth" via the scientific method. With the breadth of possible scope here, I've had a hard time finding the best concrete place to start developing this – and so would be excited for any suggestions, ideas, or if you want to collaborate on something within this scope. Feel free to reach out to me on my website with any comments and subscribe to my blog if you like my thoughts.