Why does nobody seem to understand it is ALL about the model?
Before enrolling at Texas Tech University to earn my PhD in neuroscience, I wrote a magazine-style article abstract and sent it to Nature Neuroscience. They turned me away but suggested other publishers; I reached out to a few of those and received generally tepid responses.
The essay attempted to take GWT (global workspace theory) and splice it with IIT (integrated information theory). I'm not ready to say the splice fails outright, but I'm no longer sure it explains the experience of consciousness. Integration and the workspace are both prerequisites for consciousness, but neither really explains what it is or why it exists. The model they support and update is what does that. If we really want to know what consciousness is, we should be focusing, hard, on reasoning out exactly how our minds model the world around us.
Information integration is a consciousness-underpinning process that happens basically everywhere in the brain, more or less all the time. The IIT argument re: consciousness is that the places where the most information integration occurs are the most conscious, whatever that means in a world where the theory must compete with enactivism and lose. GWT instead postulates that consciousness originates in a few places but is largely a global broadcast phenomenon.
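If you want a feel for what "information integration" means in numbers, here's a toy sketch in Python. To be clear, this is just mutual information between two halves of a tiny system, a crude stand-in I'm using for illustration; Tononi's actual Φ involves searching over partitions and is far more involved.

```python
from math import log2

def mutual_information(joint):
    """I(A;B) from a joint distribution {(a, b): p}. A crude proxy for
    'integration': zero iff the two halves are statistically independent.
    This is NOT Tononi's phi, which searches over system partitions."""
    pa, pb = {}, {}
    for (a, b), p in joint.items():
        pa[a] = pa.get(a, 0) + p
        pb[b] = pb.get(b, 0) + p
    return sum(p * log2(p / (pa[a] * pb[b]))
               for (a, b), p in joint.items() if p > 0)

# Two coupled binary units (they tend to agree) vs. two independent ones.
coupled     = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}
independent = {(0, 0): 0.25, (0, 1): 0.25, (1, 0): 0.25, (1, 1): 0.25}

print(f"coupled:     I = {mutual_information(coupled):.3f} bits")      # ~0.278
print(f"independent: I = {mutual_information(independent):.3f} bits")  # 0.000
```

The coupled system "knows something as a whole" that its parts don't carry separately; the independent system is just two parts sitting next to each other. That difference is the intuition IIT is chasing.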
The GWT hypothesis put forth by Bernard Baars seems the most reasonable on its face; after all, consciousness is thought to be a higher-order process that in some cases struggles to overcome baseline resistance at the behavioral level, e.g., addiction. But what is the global workspace? How can we make sense of this phenomenon, which seems to be a fairly recent conceptual innovation?
For an answer we must turn back to Tononi and IIT: the global workspace contains a working model of the world around the person whose head it mostly exists inside of, including experiences, predispositions, and all the other things a good life can offer. This model is continuously updated, and updated precisely by integrating new pieces of information from the world around us.
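To make that loop concrete, here's a minimal sketch, with the caveat that the class names and the salience rule are my own illustrative inventions, not anything canonical from Baars or Tononi: specialist processes compete for the workspace, the winner gets broadcast, and the world model integrates it.

```python
# A toy global-workspace loop: specialist processes compete for access,
# the winner is broadcast, and the world model integrates the content.
# Names and the salience rule are illustrative inventions, not GWT canon.

class WorldModel:
    def __init__(self):
        self.beliefs = {}

    def integrate(self, content):
        """Fold broadcast content into the standing model of the world."""
        key, value = content
        self.beliefs[key] = value

def workspace_cycle(model, candidates):
    """One cycle: the most salient candidate wins and is broadcast."""
    salience, content = max(candidates)   # competition for workspace access
    model.integrate(content)              # global broadcast -> integration
    return content

model = WorldModel()
candidates = [
    (0.2, ("ambient_noise", "hum")),
    (0.9, ("visual", "approaching tiger")),   # highly salient
    (0.4, ("interoception", "hunger")),
]
print("broadcast:", workspace_cycle(model, candidates))
print("model now contains:", model.beliefs)
```

The point of the cartoon is the division of labor: the workspace is the bottleneck and broadcast mechanism, integration is the update step, and the model is the thing that persists between cycles. My claim is that the third piece is where the interesting questions live.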
I’ve basically decided to scrap the clumsy, ad-hoc GWIIT moniker for this thesis in favor of the Worldview Ethics approach I first put forward last year. The project is likely to take on a significantly more scientific tone moving forward, but the good news from the front is that the thesis is alive and well following a six-month literature review, which included getting into a PhD program and asking real researchers about pieces of it.
It looks like we will have some studies coming soon, one of which I am hoping will combine language and emotion in an interesting new way.
Yann LeCun recently tweeted about the bandwidth requirements of vision and language, and I believe we have a strong rebuttal to him. Just as with Baars and Tononi, LeCun doesn't seem to focus enough on the *model* for my taste.
Our easy response is this: okay, ser. You win - language is lower throughput. It also suffers higher latency. And yet it is an *integrated* information stream, so each bit of it may convey orders of magnitude more usable information than a bit of raw visual nervous activity conveys.
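To put rough numbers on it, here's a back-of-the-envelope sketch. The throughput figures are literature estimates I'm borrowing for illustration (roughly 10 Mbit/s for the optic nerve, per Koch and colleagues, and roughly 39 bits/s for speech, per Coupé and colleagues), not measurements of mine.

```python
# Back-of-the-envelope comparison of raw channel throughput.
# Both figures are rough literature estimates, not measurements:
#   - optic nerve: ~10 Mbit/s (Koch et al., 2006)
#   - spoken language: ~39 bits/s across languages (Coupé et al., 2019)

OPTIC_NERVE_BPS = 10_000_000   # raw visual throughput, bits/s (estimate)
SPEECH_BPS = 39                # information rate of speech, bits/s (estimate)

ratio = OPTIC_NERVE_BPS / SPEECH_BPS
print(f"Vision carries ~{ratio:,.0f}x more raw bits per second than speech.")

# The rebuttal in one comment: a word is a pointer into a world model the
# listener already has, so its *effective* payload is the chunk of model
# state it selects, not the bits it occupies on the wire. "Tiger!" is a
# handful of bytes that retrieves appearance, behavior, and threat level
# it would take an enormous number of raw pixels to transmit.
```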
The basic idea here is that our minds model our worlds and, rather than processing raw sensory data on the fly all the time, they actually create powerful simulations that we use as tools, maps if you will, to navigate the uncertainty of the world around us. The tremendous throughput of the visual system is sustained because its output needs to be integrated in cortical tissue and then broadcast globally so it can be folded into the model, or the worldview, if you like my word. We can say weltanschauung if we prefer German. Vision works by updating the model in response to prefiltered inputs arriving through a massively high-throughput channel into the brain, which is why we have silly phenomena like the McGurk effect to enjoy: hear an audio track of /ba/ while watching lips articulate /ga/, and many people perceive /da/. In essence, giving the model two conflicting streams of suggestive inputs can create unexpected consequences.
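Here's a minimal sketch of that last point, using the standard precision-weighted cue-combination math from the perception literature. The phoneme axis and the noise numbers are made up for illustration; this is a cartoon of conflicting-stream fusion, not a serious model of the McGurk effect.

```python
def fuse(mu_a, var_a, mu_v, var_v):
    """Precision-weighted fusion of two noisy cues (standard Gaussian
    cue combination). A toy stand-in for how the model reconciles
    conflicting input streams into a single percept."""
    w_a, w_v = 1 / var_a, 1 / var_v
    mu = (w_a * mu_a + w_v * mu_v) / (w_a + w_v)
    var = 1 / (w_a + w_v)
    return mu, var

# Toy phoneme axis (invented for illustration): /ba/ = 0.0, /da/ = 1.0, /ga/ = 2.0
audio = (0.0, 1.0)   # the ears report /ba/, with some noise
visual = (2.0, 1.0)  # the lips clearly articulate /ga/

mu, var = fuse(*audio, *visual)
print(f"fused percept ~ {mu:.1f} on the ba/da/ga axis")  # ~1.0, i.e. /da/
```

Neither stream "wins"; the model splits the difference in proportion to how much it trusts each channel, and the percept lands somewhere neither sense actually reported.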
Regardless, the point is that this internal model is itself a virtual process that exhibits arbitrary characteristics relative to the neural machinery it runs on, except in very special cases such as the visual cortex, where we actually can map neurons onto specific parts of a person's visual field (retinotopy). Most neurophysiology is much less certain.
That special case is a "first-order process" because of the 1-to-1, deterministic mapping structures we can easily construct for it. But note that with ALL higher-order processes we leave first-order territory: we are at least combining multiple input sources, and at most recreating sensory experiences that always carry an unmappable virtual-state component. What I mean by that is simply that presenting a subject with the same stimulus repeatedly produces less reaction over time (habituation), and the same neurons seldom activate predictably. The mapping from observable neural phenomena in higher-order mental processes, such as emotions, back onto the neural substrate from which the emotion presumably at least in part arises is never quite 1-to-1 or deterministic. That makes it very difficult to see in fMRI or EEG data, but it nonetheless makes a good deal of sense with respect to what we're finding in the lab these days.
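A toy illustration of the habituation point, assuming a simple exponential decay of response to a repeated stimulus (the decay rate is invented for illustration):

```python
# Toy habituation: response to a repeated, identical stimulus decays
# exponentially. The decay rate is illustrative, not fitted to any data.
def response(trial: int, r0: float = 1.0, decay: float = 0.3) -> float:
    """Response magnitude on the nth presentation of the same stimulus."""
    return r0 * (1 - decay) ** trial

for trial in range(5):
    print(f"presentation {trial + 1}: response = {response(trial):.2f}")

# The stimulus is identical every time, yet the measured reaction is not:
# the mapping from stimulus to neural response is state-dependent, not a
# fixed 1-to-1 lookup. That state lives in the model, not in the stimulus.
```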
Perhaps, at the heart of it all, there lies a highest-order process: the most abstract self-representation the organism in question is capable of producing, maybe. Perhaps there are unconscious processes that could somehow be said to be higher-order than consciousness itself, exhibiting cybernetic control relationships and restraining our conscious thinking.
The study of consciousness has my full attention, and I may be exhibiting massive hubris in this simple statement, but it’s a vibe: I think I can feel it yielding. At the very least, I'm enjoying my recent immersion in it to the very limit of my ability to enjoy things. Thanks everyone for reading and I will see you next week!!!