Neuroinformatics 4 seminar, session II
Session I: Global Workspace Theory. I gave the first presentation, which covered Global Workspace Theory as explained by Baars (2002, 2004). You can read about it in those papers, but the general idea of GWT is that what we experience as conscious thought is actually information being processed in a "global workspace", through which various parts of the brain communicate with each other.
Suppose that you see in front of you a delicious pie. Some image-processing system in your brain takes that information, processes it, and sends that information to the global workspace. Now some attentional system or something somehow (insert energetic waving of hands) decides whether that stimulus is something that you should become consciously aware of. If it is, then that stimulus becomes the active content of the global workspace, and information about it is broadcast to all the other systems that are connected to the global workspace. Our conscious thoughts are that information which is represented in the global workspace.
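To make the moving parts concrete, here's a toy sketch of the cycle described above: a stimulus is submitted to the workspace, an attentional selection step decides whether it becomes the active content, and if so it is broadcast to every connected system. The class names, the numeric salience values, and the threshold are all my own illustrative inventions, not anything from Baars's papers (which, as noted below, don't specify a mechanism at this level).

```python
# Toy sketch of a Global Workspace broadcast cycle. All names and the
# salience/threshold mechanism are invented for illustration only.

class Processor:
    """A specialized brain system connected to the workspace."""
    def __init__(self, name):
        self.name = name
        self.received = []

    def receive(self, content):
        # Every connected system gets a copy of the broadcast content.
        self.received.append(content)

class Workspace:
    def __init__(self, processors, threshold=0.5):
        self.processors = processors
        self.threshold = threshold  # crude stand-in for attentional selection
        self.active = None

    def submit(self, content, salience):
        # A stimulus becomes the workspace's active content only if it
        # passes the (hand-waved) attentional selection step; it is then
        # broadcast to all connected systems.
        if salience >= self.threshold:
            self.active = content
            for p in self.processors:
                p.receive(content)
            return True
        return False

systems = [Processor(n) for n in ("memory", "motor", "speech")]
gw = Workspace(systems)
gw.submit("pie", salience=0.9)        # broadcast: every system learns of the pie
gw.submit("dust mote", salience=0.1)  # never enters the workspace
print(gw.active)                      # -> pie
```

Note that this sketch quietly picks one answer to the representation question raised below: it hands every system a copy of the same content, which is exactly the kind of detail the papers leave open.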
There exists some very nice experimental work which supports this theory. For instance, Dehaene (2001) showed experimental subjects various words for a very short while (29 milliseconds each). Then, for the next 71 milliseconds, the subjects either saw a blank screen (the "visible" condition) or a geometric shape (the "masking" condition). Previous research had shown that in such an experiment, the subjects will report seeing the "visible" words and can remember what they said, while they will fail to notice the "masked" words. That was also the case here. In addition, fMRI scans seemed to show that the "visible" words caused considerably wider activation in the brain than the "masked" words, which mainly just produced minor activation in areas related to visual processing. The GWT interpretation of these results would be that the "visible" words made their way to the global workspace and activated it. For the "masked" words there was no time for that to happen, since the sight of the masking shape "overwrote" the contents of the visual system before the sight of the word had had time to activate the global workspace.
That's all well and good, but Baars's papers were rather vague on a number of details, like "how is this implemented in practice?" If information is represented in the global workspace, what does that actually mean? Is there a single representation of the concept of a pie in the global workspace, which all the systems manipulate together? Or is information in the global workspace copied to all of the systems, so that they are all manipulating their own local copies and somehow synchronizing their changes through the global workspace? How can an abstract concept like "pie" be represented in such a way that systems as diverse as those for visual processing, motor control, memory, and the generation of speech (say) all understand it?
Session II: Global Neuronal Workspace. Today's presentation attempted to be a little more specific. Dehaene (2011) discusses the Global Neuronal Workspace model, based on Baars's Global Workspace model.
The main thing that I got out of today's presentation was the idea of the brain being divisible into two parts. The processing network is a network of tightly integrated, specialized processing units that mostly carry out non-conscious computation. For instance, early processing stages of the visual system, carrying out things like edge detection, would be a part of the processing network. The "processors" of the processing network typically have "highly specific local or medium range connections" - in other words, the processors in a specific region mostly talk with their close neighbors and nobody else.
The various parts of the processing network are connected by the Global Neuronal Workspace, a set of cortical neurons with long-range axons. The impression I got was that this is something akin to a set of highways between cities, or different branches of a post office. Or planets (processing network areas) joined together by a network of Hyperpulse Generators (the Global Neuronal Workspace). You get the idea. I believe that it's some sort of small-world network.
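The small-world intuition can be demonstrated with a toy graph: start from a ring lattice where each node only talks to its nearest neighbors (the "local connections" of the processing network), then add a handful of random long-range links (the workspace's long-range axons) and watch the average path length collapse. This is a Watts-Strogatz-style sketch with arbitrary parameters, not a model of actual cortical connectivity.

```python
# Toy illustration of the small-world idea: mostly-local wiring plus a
# few long-range links sharply cuts the average distance between nodes.
import random
from collections import deque

def ring_lattice(n, k):
    # Each node connects only to its k nearest neighbors on each side.
    g = {i: set() for i in range(n)}
    for i in range(n):
        for d in range(1, k + 1):
            g[i].add((i + d) % n)
            g[(i + d) % n].add(i)
    return g

def avg_path_length(g):
    # Mean shortest-path distance over all node pairs, via BFS from each node.
    total, pairs = 0, 0
    for src in g:
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in g[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

random.seed(0)
g = ring_lattice(200, 2)
before = avg_path_length(g)
for _ in range(10):                       # a few long-range "workspace" links
    a, b = random.sample(range(200), 2)
    g[a].add(b)
    g[b].add(a)
after = avg_path_length(g)
print(before, after)  # the long-range links substantially shorten paths
```

The design point being illustrated: you don't need many long-range connections to let any two regions exchange information in a few hops, which is presumably why a relatively small population of long-axon neurons suffices as a workspace.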
Note that contrary to intuition and folk psychology (but consistent with the hierarchical consciousness hypothesis), this means that there is no single brain center where conscious information is gathered and combined. Instead, as the paper states, there is "a brain-scale process of conscious synthesis achieved when multiple processors converge to a coherent metastable state". Which basically means that consciousness is created by various parts of the brain interacting and exchanging information with each other.
Another claim of GNW is that sensory information is basically processed in a two-stage manner. First, a sensory stimulus causes activation in the sensory regions and begins climbing up the processor hierarchy. Eventually it reaches a stage where it may somehow be selected to be consciously represented, with the criteria being "its adequacy to current goals and attention state" (more waving of hands). If it is selected, it becomes represented in the GNW. It "is amplified in a top-down manner and becomes maintained by sustained activity of a fraction of GNW neurons": this might re-activate the stimulus signal in the sensory regions, where its activation might have already been declining. Something akin to this model has apparently been verified in a number of computer simulations and brain imaging studies.
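The two-stage dynamic can be caricatured in a few lines: bottom-up activation decays over time, but if it crosses an "ignition" threshold it is selected into the workspace and then amplified and sustained top-down. The threshold, decay rate, and gain below are made-up numbers, not anything from the paper; this is just the qualitative shape of the model, and it also reproduces the masking result above (a weak or cut-short trace fades before ignition).

```python
# Minimal sketch of the GNW two-stage dynamic: decay until ignition,
# then top-down amplification and sustained activity. All numeric
# parameters are invented for illustration.

def simulate(initial_strength, steps=20, decay=0.85,
             ignition_threshold=0.6, topdown_gain=1.5):
    activation = initial_strength
    ignited = False
    trace = []
    for _ in range(steps):
        if not ignited and activation >= ignition_threshold:
            ignited = True                # selected into the GNW
        if ignited:
            # Top-down amplification re-activates the declining signal
            # and then holds it at a sustained level.
            activation = min(1.0, activation * topdown_gain)
        else:
            activation *= decay           # stimulus trace fades
        trace.append(activation)
    return ignited, trace

# A strong ("visible") stimulus ignites and is sustained:
print(simulate(0.8)[0])   # -> True
# A weak ("masked") stimulus fades before reaching the threshold:
print(simulate(0.4)[0])   # -> False
```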
Which sounds interesting and promising, though this still leaves a number of questions unclear. For instance, the paper claims that only one thing at a time can be represented in the GNW. But apparently the thing that gets represented in the GNW is partially selected by conscious attention, and the paper that I previously posted about placed the attentional network in the prefrontal cortex (i.e. not in the entire brain). So doesn't the content in the sensory regions then need to first be delivered to the attentional networks (via the GNW) so that the attentional networks can decide whether that content should be put into the GNW? Either there's something wrong with this model, or I'm not understanding it correctly. I should probably dig into the references. And again, there's the question of just what kind of information is actually put into the GNW in such a manner that all of the different parts of the brain can understand it.
(Yes, I realize that my confusion may seem incongruent with the fact that I just co-authored a paper where we said that we "already have a fairly good understanding on how the cerebral cortex processes information and gives rise to the attentional processes underlying consciousness". My co-author's words, not mine: he was the neuroscience expert on that paper. I should probably ask him when I get the chance.)