Introduction
Since the circulation of a recent piece of my writing, “Determined, or Not?”, in which I reviewed Dr. Robert Sapolsky’s recent book Determined, several questions have arisen. The most substantial of these is what the concept of a virtual layer atop the neurobiological machinery of the brain adds to the discussion of consciousness. Another concerns the role of thermodynamics and entropy in consciousness. It is somewhat odd that the literature has avoided the seemingly obvious route of using the mind’s reliance on abstractions to deduce that thinking must contain an arbitrary component, and from there concluding that consciousness involves a virtual process running atop the biological structures Sapolsky correctly describes as deterministic. This path seems to lead quickly toward a solution to the problem of the neurobiology of conscious thinking, so this short paper will expound on the concept briefly as a supplement to the original piece. The general idea is that the virtual layer is a useful tool for moving beyond an understanding of what seems to be happening toward an investigation of why these processes unfold the way they do. A robust model here, should one be developed, could therefore make significantly more detailed and accurate predictions about the internal experience itself.
Remarkably, we do find significant areas of agreement between aspects of current empirical research into consciousness and the study of interpersonal relationships. This is reasonable because evolution tends to conserve mechanisms capable of preserving life; that is, there is an evolutionary perspective available to us for understanding a virtual inner reality that is not causally determined by the operation of the neurobiology we can study in the lab today. If conscious thinking were merely the accumulation of inputs, there would be no need for intense inner experiences to accompany the individual’s body as it traversed the world; behaviorism would be true. For this reason, we can expect eventually to find a theory of consciousness that avoids contradicting empirical evidence, if we are patient enough, even if consciousness does not turn out to be a predominantly mechanistic phenomenon.
There are two real problems in consciousness: one is related to rational thought, and the other has more to do with irrationality. These two clearly observable characteristics are simultaneously present in every human mind, but most theories of mind have room for only one of them. A nontrivial part of the difficulty of the modern project to explain consciousness is that conscious systems can get things wrong as well as get them right. Insofar as a quantum computer can be pictured as holding a superposition of candidate solutions to a problem and then collapsing onto the desired result, it is a suitable analogy for conscious thought. Digital computers are suitable insofar as the operations the processor executes can be arbitrary with respect to the experience of the user. We will use computers as an analogy to get to the bottom of this concept of a virtual layer of conscious experience.
Van Eck Phreaking vs. Neuroimaging Techniques
Let us begin by comparing the conscious mind and its observer to a similar case from Neal Stephenson’s classic novel Cryptonomicon, in which one hacker uses the technique known as Van Eck phreaking to spy on the contents of another’s laptop. The man in the room with his computer, who thinks he is alone, resembles the mind in the body in many ways. There is a display, and there is a user. However, unlike the eavesdropper in the story, the scientist using advanced technology to peek into the brain does not have comparable access. Understanding why can illuminate the problem of consciousness in perhaps the clearest terms yet.
Van Eck phreaking is a process in which a computer system is compromised from the outside by an external party’s careful analysis of the electromagnetic radiation it emits. In a nutshell, it is possible to recreate the images from one computer screen on another screen without the first user knowing. The important concept here is that electromagnetic radiation can be used to discern an internal process behind a computer’s user interface and replicate it on a different one, which offers an analogy to neuroscientific studies that use fMRI and EEG, among other techniques, to observe the brain in action via the electrical and metabolic signals generated by neural activity. The problem is that, when we look into people’s brains, we do not know what the user interface is really like. To some extent this is because each of us is a different body with a different perspective, to which concepts from other minds cannot always be analogized. To some extent the issue is that a given input does not always yield any particular output at all; indeed, the layers of neural circuitry that inputs must percolate through to reach consciousness transform the raw sensory data our eyes and ears pick up to the point where the raw data would probably be unrecognizable when compared with lived experience in actual minds.
More precisely, Van Eck phreaking is possible primarily because we know in detail how computer screens work, and because we can usually interpret the intentions of a human being operating a computer by observing their actions in the context of the user interface. In consciousness research we lack this navigational advantage; instead of knowing the technical specifications for the screen we would need in order to view the process on its own terms, we are flying blind. A recent paper co-authored by Yoshua Bengio surveys scientific theories of consciousness from the standpoint of asking whether today’s artificially intelligent systems are in fact conscious (Butlin et al., 2023). The high-level discussion of scientific theories of consciousness in this review is of high quality, and it is worth noting that none of the theories presented offers a concept analogous to the screen. It is also worth noting, unfortunately, that if we begin by assuming the truth of computational functionalism, we essentially weed out all of the potentially interesting places to search for insight into consciousness.
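To make the asymmetry concrete, here is a minimal sketch in Python. It is a toy model of my own, not actual Van Eck phreaking or any neuroimaging pipeline: the leaked “radiation” is simulated as a known linear mixture of pixel values plus noise, and reconstruction succeeds only when the observer knows that mixing, the equivalent of knowing the screen’s technical specification. All variable names and numbers are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# The "screen": a small image the user is looking at (values in [0, 1]).
screen = rng.random((8, 8))

# The "emission model": each measurement is a known linear mixture of pixel
# values plus noise. Knowing this matrix is the eavesdropper's equivalent of
# knowing how the display hardware encodes the image.
mixing = rng.normal(size=(256, 64))          # 256 measurements of 64 pixels
leak = mixing @ screen.ravel() + rng.normal(scale=0.01, size=256)

# Case 1: the encoding is known, so the image can be recovered by inverting
# the measurement model (least squares).
recovered, *_ = np.linalg.lstsq(mixing, leak, rcond=None)
print("known encoding, worst pixel error:  ",
      np.abs(recovered.reshape(8, 8) - screen).max())

# Case 2: the encoding is unknown. Guessing a different mixing matrix yields
# a "reconstruction" that bears no relation to the real screen.
wrong_mixing = rng.normal(size=(256, 64))
garbled, *_ = np.linalg.lstsq(wrong_mixing, leak, rcond=None)
print("unknown encoding, worst pixel error:",
      np.abs(garbled.reshape(8, 8) - screen).max())
```

The same measurements support either a faithful reconstruction or noise, depending entirely on whether the observer holds the display specification; the neuroimaging case is, on this analogy, permanently stuck in the second branch.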
Leading neuroscientist Antonio Damasio, on the other hand, frequently refers to a “screen” in his books (Damasio, 2018). It is not difficult to see what he means: each of us is a virtual self-process created by the central nervous system of the particular human body that gives rise to us, and we are all so deeply embodied that it is difficult to appreciate, at the level of self-awareness, the extent to which our higher-order thinking relates back to the basic metabolic requirements that gave rise to our conscious selves. For Damasio, the screen is the basic level of conscious experience we have. It is visual, but not only visual. Psychologists study it by putting people through sleep deprivation, exposing test subjects to stimuli and observing the attenuation of conscious awareness of those stimuli over time. Under sleep deprivation, we see less activation in the prefrontal cortex, where abstract thinking is thought to occur, and more activity in the brainstem, in the older, lower layers of the brain.
It is possible to study someone’s conscious process and, if that person’s brain has been injured, observe problems ranging from general indeterminism to lack of awareness to misattribution of intent. These basic ways in which brains break down can yield valuable insights into the neurological system our consciousness is enmeshed in, but it is important to remember that consciousness is not a solved problem yet! There is much we do not know, and, unfortunately for those who would prefer the simple “it’s all just deterministic” view, there is really no basis for that claim, even though a good deal of the world is deterministic. It could well be that hunting for functional, deterministic neurological circuits that map onto phenomena verifiable by action in the world is what gives us the instinct for patterns we have (Daniel, 2023; Lent, 2017).
The screen of consciousness is our inward-facing model of the world. The difficulty in seeing it is that we have to use the screen itself in our attempts to understand it; nonetheless, Damasio puts in the effort and makes the case that what begins as a low-level impulse to ensure continued metabolic activity becomes increasingly complex as additional metabolic units are added, until it gives rise to an entirely virtual process that bears only a limited relation to the external reality it represents. Insofar as we are conscious of the world around us, we are able to do as we like within its limits. However, in illness and disease, as well as in aging, our inner model of the world no longer functions with the robustness it had in health and youth; we struggle to remember, to learn, and to cause the changes we desire in the world around us. To say that consciousness is determinate is to say that it is healthy, that it achieves the end at which it aims. In many cases, it regrettably does not.
Entropy relates to determinism in consciousness insofar as high-entropy conditions are painful to us. What happens is this: a high-entropy relationship between one’s internal worldview and external environment produces a condition in which it is very difficult to affect the world by consciously deciding to do something and then doing it. Either the action proves impossible, or its result is unexpected. In both cases, the neurons that predictively model the environment to facilitate deliberation, in circuits heavily modulated by the neurotransmitter dopamine, register that the prediction was bad, and their signal is attenuated, which is painful to experience. In this sense, entropic conditions are bad for our bodies: over evolutionary history, excessive entropy has always correlated with difficulty in sustaining metabolic activity, and we are thus wired to do absolutely anything we can to avoid this state of affairs.
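To make the claim about prediction error slightly more concrete, here is a minimal sketch in the spirit of standard Rescorla-Wagner/temporal-difference models of dopaminergic prediction error. It is my own toy illustration rather than anything drawn from Sapolsky or Damasio, and the parameter values are arbitrary; the point is simply that in a low-entropy environment residual surprise shrinks as the learner settles, while in a high-entropy environment it never does.

```python
import random

def mean_abs_prediction_error(outcome_noise, trials=500, lr=0.1, seed=1):
    """Rescorla-Wagner-style learner: predicts the next outcome, updates on
    the prediction error, and reports how large the errors remain once
    learning has had time to settle."""
    random.seed(seed)
    prediction, late_errors = 0.0, []
    for t in range(trials):
        outcome = 1.0 + random.gauss(0.0, outcome_noise)  # what actually happens
        error = outcome - prediction                      # "dopamine-like" prediction error
        prediction += lr * error                          # learn from the surprise
        if t >= trials // 2:                              # only score the second half
            late_errors.append(abs(error))
    return sum(late_errors) / len(late_errors)

# Low-entropy world: outcomes are predictable, residual surprise is small.
print("low entropy :", round(mean_abs_prediction_error(outcome_noise=0.05), 3))
# High-entropy world: outcomes stay unpredictable, surprise never goes away.
print("high entropy:", round(mean_abs_prediction_error(outcome_noise=1.0), 3))
```

On the reading offered above, the second case is the one our nervous systems are wired to experience as painful and to avoid.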
Virtual Conscious Processes
Activities that we are consciously aware of, such as my writing this sentence or your reading it, make up a very small proportion of what happens in our minds and bodies. Yet the arbitrariness of possible choices of words or pronunciations immediately reveals an author’s background or a reader’s way of building vocabulary. There are multiple ways to communicate a message, and we learn which is best in a given circumstance from experience with similar circumstances in the past. Repetition of this process gives shape to a strategy for benefiting maximally in the future. In some sense, the word choices I use to communicate these concepts could be made differently, and this is why I label problems such as the specific information content of a message arbitrary. We can expect a substantial degree of causal invariance in speech acts as a result, but if we stray too far from the sort of message our audience can understand, communication fails.
This process of communication gives rise to a sense in which the past determines the future, but the endless idiosyncrasies observable across individual strategies in different instances are evidence that deliberation and choice are causal drivers of outcomes. There is a lot of wiggle room in consciousness too, as the play of young conscious animals reveals. Even random actions are observed by consciousness, and as the limits of one’s world are tested and revealed, they are remembered, so that they can in turn participate in driving the causal choicemaking of the future.
The FWD (free will/determinism dialectic) misses the mark because it fails to take into account the possibility that there is a physical basis for a process which, in itself, cannot be deterministic because its content is arbitrary with respect to the biological substrate it exists atop. Synapses form on the basis of genetics, but the particular interaction of the genes in question with an environment, in a particular body, in a particular place, and at a particular time is so complex that it virtually never repeats, and even when we manage to explain parts of it, undesirable errors remain in our predictive modeling. Consciousness is mediated by biological processes, but one of those processes is agency, which at the very least cannot be wholly materially determined. As agency expands, the underlying biology can change in surprising ways. A new synapse can form, or degrade. An increase in metabolic activity can begin, continue, or subside. The field of possibilities is as yet so vast that we, the passive observers in the laboratory, can make little sense of what we observe in the neural machinery, oscillating as it does between states over time, conceivably in response to something, but something far beyond our ability to guess.
Virtual conscious processes are processes we experience directly, and they are related to the information received from the brain’s immediate internal and external feedback loops, but the virtual layer’s interactions with the physical substrate that gives rise to it have an arbitrary component as well. Walking to the top of a grassy hill to put down a blanket and enjoy a book in the afternoon sun can contain an almost endless variety of internal content, mediated both by the choice of literature and by the frame of mind of the reader, even though the sensory data are rather fixed: a breeze, perhaps, the sunlight, the grass, and the fixed content of the book. Minds navigate this possibility space by serving as filters, just as Huxley had it, but science that attempts to be more objective about what is possible across all minds has fallen short of the mark by limiting its focus to smaller and more simplistic instances of mental activity (Huxley, 1963). Studying brains at their most basic gives us little information about the wealth of experiences they can give rise to; perhaps the thing to do is to leap into the deep end of the pool instead and ask our hard questions of the neurobiology.
To account for the range of experiences that can be had in similar circumstances is to embrace the extent to which consciousness is able to escape the more basic aspects of its immediate surroundings by being arbitrary with respect to both exogenous stimuli and the corresponding neural activation patterns. Only a virtual conscious process could explain this hole in the FWD logic, where determinism predicts a system that runs repeatably. That is, nothing about conscious experience is repeatable, and neurons are changed by their own firing, such that we could easily follow Heraclitus and claim that “no neuron ever fires the same way twice.” The philosophical position of free will insists that the system is nonetheless capable of choosing and of producing action to follow and evaluate against that choice, which flies in the face of laboratory evidence that damaging the neurobiological machinery changes the sorts of thoughts a person has and the sorts of actions they can take. Neither pole of the FWD dialectic (i.e., hard determinism or free will) is the whole truth, although both contain a measure of relevant information about what we observe when we pay attention.
The Function of Cognitive Arbitrariness for Metabolism, at Evolutionary Scale
Arbitrariness serves metabolic needs by enabling a wide range of experimentation, ensuring that valuable insights can be preserved while mediocre moments are forgotten. It is this aspect of cognition that tries everything and encourages only the successful processes to continue. From countless such experiments we can witness the birth and evolution of cultural phenomena, yet it remains quite difficult to model these in the lab in even a single brain, and it is not yet clear exactly how this relates back to the metabolic level of existence that gives rise to all of these processes. This final section of these notes will attempt to provide a basic account of choicemaking as it emerges in contemporary cognitive science.
The unfortunate aspect of the Butlin review (Butlin et al., 2023) is that the authors allow computational functionalism to limit the scope of their investigation of consciousness. A far more revealing investigation into the nature of conscious thought becomes possible if computational functionalism is set aside, and arguably the review leaves the most valuable insights for the creation of artificially conscious machines on the table by declining to investigate Integrated Information Theory (IIT) or embodiment/enactivism as perspectives on consciousness. To first understand what consciousness is and how it works, and only then, once human and animal consciousness is understood, to attempt to build the machine equivalent of the phenomenon is perhaps a more efficient route through the problem space.
One completely irrelevant consideration here is the philosophical quandary of whether the content of consciousness is materially determined before it happens. Actions are probably more determined insofar as conscious thinking is simpler, such as the sort of thinking that happens when you have been up for 48 hours without sleep and your brainstem is the primary driver of your thoughts. Actions are probably less determined insofar as the creation, availability, and comprehension of increasingly abstract mental objects enables an increasing level of arbitrariness between mental objects, experienced phenomena, and neurobiological foundations, though all conscious systems likely have some level of inner experience.
It is the aggregate will of the roughly 37 trillion cells of the human body, together with all of its constituent microbial allies, that produces the machinery allowing a virtual self to process inputs from each member of the collective and make choices at a level of abstraction far beyond the comprehension of any individual component. To argue that this process is deterministic is to underappreciate its capacity to enact its will upon its environment, and to claim that it can be recreated in silico is likely a dramatic underappreciation of the will-to-live expressed in each individual microorganism’s metabolic relationship with its environment and neighbors. After all, if consciousness is predominantly a mode of engagement that allows bodies to strengthen the guarantee of oxygen and nutrients to each cell, the thought that a machine could be conscious rings hollow: what is it conscious of? What does it care about? Perhaps something, but what that thing could be is far from obvious. This makes it quite difficult to predict whether present AI systems will manage to learn to reason the way conscious minds do, as Melanie Mitchell, a cognitive scientist with decades of experience in AI research, observes (Mitchell, 2023).
Present efforts to reverse engineer the most successful large language models seem to reveal some level of determinism in their internals, but since these systems lack a will-to-live, it is impossible to argue that they are self-driven. The function of a self is to sort abstractions on an at least semi-arbitrary basis and determine which are worthy of attention, and although LLMs do produce some arbitrariness, the characteristics of their hallucinations seem to support the view that this is noise rather than signal (Elhage et al., 2021). The difference is that in a human brain there is a metabolism to serve as a grounding rod for deciding what is good and what is not; the computer systems lack this valence component, and to them, as a result, all of the data looks the same.
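As a toy illustration of the “noise, not signal” point, and not a claim about how any real model or the Elhage et al. framework works, consider softmax sampling over a hand-built next-token distribution in which a grounded continuation and a confabulated one receive similar scores. The sampling step injects arbitrariness, but nothing inside the distribution supplies the valence that would privilege the grounded answer; the token names and numbers below are invented for the example.

```python
import math
import random

# Hypothetical next-token scores for "The capital of France is ...": the model
# assigns similar weight to a grounded continuation and a confabulated one.
# Nothing inside the distribution itself marks which answer matters.
logits = {"Paris": 2.1, "Lyon": 1.9, "Atlantis": 1.8}   # illustrative numbers

def sample(logits, temperature=1.0, rng=random):
    """Softmax sampling: the arbitrariness here is pure noise over the model's
    scores, not a judgment about which answer is grounded in the world."""
    weights = {tok: math.exp(score / temperature) for tok, score in logits.items()}
    total = sum(weights.values())
    r = rng.uniform(0, total)
    for tok, w in weights.items():
        r -= w
        if r <= 0:
            return tok
    return tok  # numerical edge case fallback

random.seed(0)
counts = {tok: 0 for tok in logits}
for _ in range(1000):
    counts[sample(logits)] += 1
print(counts)  # all three candidates appear; the "hallucination" is just sampling noise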
In conclusion, it is important to note that there is much we do not yet understand about consciousness. The best approximation of a comprehensible mode of conscious thinking relies upon the relation of arbitrary events to conscious inner experience, a phenomenon which provides no decisive evidence on the question of free will versus determinism and instead obfuscates these discussions by increasing their complexity to the point where they become unwieldy.
Concluding Thoughts
We know that microprocessors must provide a basic level of deterministic functionality to be useful in running computer programs, but we also know that the choice of which program to run at a particular time in a particular silicon chip’s life cycle must contain arbitrariness if it is to suit the computational needs of a real user. The evidence cognitive science has provided thus far supports a view of consciousness emerging in a biological environment as a coordinated entropy-reduction effort, one that re-creates its more successful efforts to contain entropy via genetics over successive generations of biological bodies. The view that emerges, if we accept both of the preceding statements, is of a system that builds successive layers of deterministic processes within itself to continually increase stability and resilience for indefinite propagation despite a high-entropy external environment.
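The first of those two statements can be put in a few lines of Python, a toy sketch rather than a claim about real hardware: the “processor” below is a pure, repeatable function, while the decision about which program it runs arrives from outside it.

```python
def run(program, x):
    """A deterministic toy 'processor': the same program and input always
    produce the same output."""
    for op, arg in program:
        if op == "add":
            x += arg
        elif op == "mul":
            x *= arg
    return x

# The substrate is deterministic...
double_then_inc = [("mul", 2), ("add", 1)]
assert run(double_then_inc, 10) == run(double_then_inc, 10) == 21

# ...but nothing inside `run` decides which program it executes; that choice
# arrives from outside, and is arbitrary with respect to the deterministic
# machinery that carries it out.
programs = {
    "double_then_inc": double_then_inc,
    "add_five_then_triple": [("add", 5), ("mul", 3)],
}
choice = "add_five_then_triple"   # stand-in for the user's arbitrary selection
print(run(programs[choice], 10))  # -> 45
```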
Within this unvarnished paradigm, there are myriad inputs into one conscious “screen,” enabled by the myriad components that each provide information to this global process. Neural substrates are intriguing, complex, and revelatory in this investigation, but the processes they give rise to must nonetheless be further analyzed and decoded to reveal the specific nature of consciousness over time. It is possible that a large proportion of the underlying processes are deterministically chaotic or determined outright, and that the stream of conscious images is driven by them in large part, but alignment within a particular mind seems to require both specialized processes tailored to the environment at hand and accurate models containing the information needed to continuously scale the complexity of conscious thought to the next level. Evidence for this is supplied directly to each of us by the occasional discontinuities of conscious experience, and it can also be observed in the lab via optical illusions and other sleights of hand in which the mental model is tricked into reporting a sensory datum that does not accurately reflect the state of things external to the body.
About the virtual component of consciousness, then, what can be said is in one way quite vague and in another quite solid. The physiological underpinnings of this virtual process can be mapped in exquisite detail without revealing what we really want to understand: the conscious unfolding of life that each of us experiences. It is possible that the openness of this process of integrating experiences, in its creative, arbitrary, error-prone, novelty-seeking way, will always defy reductive or mechanistic explanation, but even so it is possible to draw general conclusions about how to improve it.
References
- Butlin, P., Long, R., Elmoznino, E., Bengio, Y., Birch, J., Constant, A., ... & VanRullen, R. (2023). Consciousness in artificial intelligence: Insights from the science of consciousness. arXiv preprint arXiv:2308.08708.
- Damasio, A. (2018). The strange order of things: Life, feeling, and the making of cultures. New York: Pantheon.
- Daniel, T. D. (2023). What is cognitive history? https://app.t2.world/article/clmnxx5oq5701171fmcqyrv7vjd
- Elhage, N., Nanda, N., Olsson, C., Henighan, T., Joseph, N., Mann, B., Askell, A., Bai, Y., Chen, A., Conerly, T., DasSarma, N., Drain, D., Ganguli, D., Hatfield-Dodds, Z., Hernandez, D., Jones, A., Kernion, J., Lovitt, L., Ndousse, K., Amodei, D., Brown, T., Clark, J., Kaplan, J., McCandlish, S., & Olah, C. (2021). A mathematical framework for transformer circuits. Transformer Circuits Thread. https://transformer-circuits.pub/2021/framework/index.html
- Huxley, A. (1963). The doors of perception: And Heaven and hell. New York: Harper & Row.
- Lent, J. (2017). The patterning instinct: A cultural history of humanity's search for meaning. Prometheus Books.
- Mitchell, M. (2023). AI's challenge of understanding the world. Science, 382, eadm8175. https://doi.org/10.1126/science.adm8175
*Thanks for reading this piece! It is part of a broader investigation into the literature around consciousness and artificial intelligence. Find the index of these works at https://worldviewethics.cent.co/.*