Honderich's recent monograph is invaluable even before he announces his own theory of actual consciousness, by virtue of demonstrating one pathway through the discipline's inherited theoretical thickets. In his own solution to the sifted problems he thinks any adequate theory of consciousness must address, Honderich departs intriguingly from the standard metaphysics of consciousness. Beginning with perceptual consciousness in his triune distribution, he presents a novel way of thinking about a passing momentary consciousness of sensory experience.
I visually admire a ripe apple before me on the table. I grip and bite into the apple, and I am perceptually conscious of its taste, as I may also be of its color, size, shape, and surface generally. For Honderich, these experiences of the apple do not presuppose a thought-perceptual object relation whereby the actual world is sharply distinguished from its representations in streaming successive moments of consciousness.
Rather, Honderich proposes an analysis by which there are multiple actual worlds, all of them physical. Actuality is in particular each thinking subject's subjective physical world. The subjective physical worlds in which each of us lives are like separate apartments to which no one else is admitted. If Honderich is right, then they are also exactly so many actualities. I am not sure that I fully grasp the distinction between objective and subjective physical reality that is key to understanding Honderich's new theory of consciousness: the category of the subjectively physical.
Subjective physical worlds are not separate from consciousness. Although we have not yet considered cognitive and affective consciousness, we have no reason to think that subjective physical worlds do not stand in lawful or dependency relations with it. Also, subjective physical worlds are identical with and include facts of consciousness. As you will guess, we are here at part of the centre or gravamen of the actualism theory of consciousness.
Perceptual consciousness, already characterized as physical, is also in the given way or sense subjective. Subjective physical worlds, further, unlike the objective physical world, are almost always a matter of the consciousness of one particular individual perceiver.
Actual Consciousness, Notre Dame Philosophical Reviews, University of Notre Dame.
To the extent that I understand the concept, each of us lives, functions, or operates within his or her own subjective physical world. There is, apparently for decorum's sake, one objective physical world, but then as many subjective physical worlds as there are perceiving subjects, each of which, along with the subjective moments of consciousness it contains, is actual. Subjective physical worlds are not mere tablet-stylus imagistic representations of the objective physical world in causal partial sync with its ongoing events, but physical worlds themselves in their own ontic-metaphysical right.
They are for each of us the physical world of perception, plus affect and cognition (hence the subjectivity) and action (hence the actuality). The exact ontology of this remarkable relation is mentioned but not further explained by Honderich, as though in light of criticisms of other theories of consciousness it were the only or best explanation. Which it could be, although I did not see the argument for that proposition in Honderich's book. Honderich does not spell out an exact inference, with all its assumptions basking in the sun, that would allow us to pocket the superiority of positing a single objective and multiple subjective physical worlds ontology in order to explain the nature of perceptual consciousness.
Actual consciousness as the physical world of each subject's subjective individual consciousness is not a mere approximate representation of an external mind-independent objective physical world. It is a world in and of itself, containing the subjective presentations of dynamic things in which we live and of which we are conscious, or of which at least our perceptual consciousness consists, and with which in that space we interact with other things, including socially with other persons. It remains unclear to me in particular, despite my desire to be sympathetic, what would justify postulating a singleton objective world and a plethora of subjective physical worlds.
Why could Honderich not make all the same essential points by holding that there is one physical world that presents as many aspects of itself subjectively as there are different perceiving subjects? How is understanding of consciousness gained by speaking of distinct worlds? Is it to powerfully emphasize the subjectivity of consciousness and interimpenetrability of the conscious states of different conscious subjects? It is not clear that we must resort to worlds for that modest conclusion.
There is a theoretical downside also to accepting multiple subjective physical worlds in the metaphysics of consciousness.
What is actual for one subject is not the actual subjective physical world of any other subject. If actuality is, as Honderich maintains, being subjectively physical, how is it possible for science to address itself methodologically to a common actuality, a common actual physical world? The objective physical world exists for Honderich almost in neo-Kantian, P. F. Strawsonian style, independently of actuality, which is identified instead with an immense succession of distinct subjective physical worlds.
The nagging problem here, I suspect, is working out the relation between the objective physical world and the actualities of all conscious subjects living in their respective subjective physical worlds. If a subjective physical world is the world that each of us inhabits, where our cares and intentions are located, why suppose that there is besides these also an objective physical world? Certainly we have no direct perceptual access to it. Perception takes us no further than subjective physical actuality. For this reason we cannot compare the contents of moments of conscious perception with an external reality as its mental representations.
Affective consciousness is not at issue here, leaving only cognitive consciousness in Honderich's category scheme. For a philosopher to be conscious that there is an objective physical world in addition to the philosopher's occupied subjective physical world requires accepting an abstract argument to that effect. Would such a world not then be excluded on these grounds by Ockham's Razor? Kantian noumenal reality, even of a Strawson-inspired kind, does not offer contemporary empirical science objectivity in the sense it needs and expects. Appealing to multiple subjective physical worlds, multiple actualities, rather than a mind-independent singleton actual world, is unlikely to be greeted by many theorists as doing the natural sciences much of a metaphysical or epistemological favor.
One suspects that Honderich's metaphysics faces an uphill climb to find favor with rigorously experimental neurophysiological and psychological science. Honderich rightly emphasizes the intentionality of representation. He finds the intentionality of consciousness more developed philosophically than discussions of qualia. He staunchly resists the recent wave of so-called representational theories of consciousness that try to offer unexplicated representation as an alternative to theories emphasizing the intentionality or aboutness of conscious thoughts. Abstract one-one mappings of things and their parts can always be supposed to exist, but, lacking an intrinsic intentionality by which this object in the mapping network symbolizes its corresponding object, they are not yet representations of anything.
That Honderich's discussion of actual consciousness opens so many avenues for philosophical exploration is the measure of its success and likely long-lasting contribution to the study and understanding of consciousness.

In many experimental contexts, the underlying idea is causal necessity and sufficiency. To test necessity, one would eliminate a certain neural state and demonstrate that consciousness is abolished. Notice that such tests go beyond mere correlation between neural states and conscious states (see section 1).
Whichever option holds for S, the first step is to find N, a neural correlate of consciousness (section 1). In what follows, to explain generic consciousness, various global properties of neural systems will be considered (section 3), as well as specific anatomical regions that are tied to conscious versus unconscious vision as a case study (section 4).
2. Methods for Tracking Consciousness
For specific consciousness, fine-grained manipulations of neural representations will be examined that plausibly shift and modulate the contents of perceptual experience (section 5). It is undeniable that some organisms are subjects of experience. But the question of how it is that these systems are subjects of experience is perplexing. Why is it that when our cognitive systems engage in visual and auditory information-processing, we have visual or auditory experience: the quality of deep blue, the sensation of middle C?
How can we explain why there is something it is like to entertain a mental image, or to experience an emotion? It is widely agreed that experience arises from a physical basis, but we have no good explanation of why and how it so arises. Why should physical processing give rise to a rich inner life at all? It seems objectively unreasonable that it should, and yet it does.
If any problem qualifies as the problem of consciousness, it is this one (Chalmers). The Hard Problem can be specified in terms of generic and specific consciousness (Chalmers). In both cases, Chalmers argues that there is an inherent limitation to empirical explanations of phenomenal consciousness, in that empirical explanations will be fundamentally either structural or functional, yet phenomenal consciousness is not reducible to either.
This means that there will be something that is left out in empirical explanations of consciousness, a missing ingredient (see also the explanatory gap; Levine). There are different responses to the hard problem. One response is to sharpen the explanatory targets of neuroscience by focusing on what Chalmers calls structural features of phenomenal consciousness, such as the spatial structure of visual experience, or on the contents of phenomenal consciousness. When we assess explanations of specific contents of consciousness, these focus on the neural representations that fix conscious contents.
These explanations leave open exactly what the secret ingredient is that shifts a state with that content from unconsciousness to consciousness. As for ingredients explaining generic consciousness, a variety of options have been proposed (see section 3), but it is unclear whether these answer the Hard Problem, especially if any answer to the Problem must, as a necessary condition, conceptually close off certain possibilities, say the possibility that the ingredient could be added yet consciousness not ignite, as in a zombie, a creature without phenomenal consciousness (see the entry on zombies).
Indeed, some philosophers deny the hard problem (see Dennett for a recent statement). Perhaps the most common attitude for neuroscientists is to set the hard problem aside. Instead of explaining the existence of consciousness in the biological world, they set themselves to explaining generic consciousness by identifying neural properties that can turn consciousness on and off, and to explaining specific consciousness by identifying the neural representational basis of conscious contents.
Identifying correlates is an important first step in understanding consciousness, but it is an early step. After all, correlates are not necessarily explanatory in the sense of answering specific questions posed by neuroscience. That one does not want a mere correlate was recognized by Chalmers, who defined an NCC as follows: an NCC is a minimal neural system N such that there is a mapping from states of N to states of consciousness, where a given state of N is sufficient, under conditions C, for the corresponding state of consciousness. One wants a minimal neural system since, crudely put, the brain is sufficient for consciousness, but to point this out is hardly to explain consciousness even if it provides an answer to questions about sufficiency.
The emphasis on sufficiency goes beyond mere correlation, as neuroscientists aim to answer more than the question: What is a neural correlate for conscious phenomenon C? Perhaps more specifically: What neural phenomenon is causally sufficient for consciousness? After all, even if the NCC is type identical to a conscious state, that state's effects will correlate with it without being causally sufficient for it. Thus, some correlated effects will not be explanatory. For example, citing the effects of consciousness will not provide causally sufficient conditions for consciousness.
In other contexts, neuroscientists speak of the neural basis of a phenomenon, where the basis does not simply correlate with the phenomenon but also explains and possibly grounds it. However, talk of correlates is entrenched in the neuroscience of consciousness, so one must remember that the goal is to find the subset of neural correlates that are explanatory, in answering concrete questions. Reference to neural correlates in this entry will always mean neural explanatory correlates of consciousness; on occasion, I will speak of these as the neural basis of consciousness.
That is, our two questions about specific and generic consciousness focus the discussion on neuroscientific theories and data that contribute to explaining them. This project allows that there are limits to neural explanations of consciousness, precisely because of the explanatory gap (Levine). Since studying consciousness requires that scientists track its presence, it will be important to examine various methods used in neuroscience to isolate and probe conscious states.
Scientists primarily study phenomenal consciousness through subjective reports. We can treat reports in neuroscience as conceptual in that they express how the subject recognizes things to be, whether regarding what they perceive (perceptual or observational reports, as in psychophysics) or regarding what mental states they are in (introspective reports).
Subjective reports of conscious states draw on distinctively first-personal access to that state. The subject introspects. Introspection raises questions that science has only recently begun to address systematically in large part because of longstanding suspicion regarding introspective methods.
Introspection was judged to be an unreliable method for addressing questions about mental processing. This makes it difficult to address long-standing worries about introspective reliability regarding consciousness. In science, questions raised about the reliability of a method are answered by calibrating and testing the method. This calibration has not been done with respect to the type of introspection commonly practiced by philosophers. A scientist might worry that philosophical introspection merely recycles rejected methods of a century ago, indeed without the stringent controls or training imposed by earlier psychologists.
How can we ascertain and ensure the reliability of introspection in the empirical study of consciousness? One way to address the issue is to connect introspection to attention. Philosophical conceptions of introspective attention construe it as capable of directly focusing on phenomenal properties and experiences. As this idea is fleshed out, however, it is clearly not a form of attention studied by cognitive science, for the posited direct introspective attention is neither perceptual attention nor what psychologists call internal attention. Calibrating introspection as it is used in the science of consciousness would benefit from concrete models of introspection, models we lack (see Spener for a general form of calibration).
One philosophical tradition links introspection to perceptual attention, and this allows construction of concrete models informed by science. Look at a tree and try to turn your attention to intrinsic features of your visual experience (Harman). This is related to a proposal inspired by Gareth Evans: in introspecting perceptual states, say judging that one sees an object, one draws on the same perceptual capacities used to answer the question whether the object is present.
Further, the advantage of this proposal is that questions of reliability come down to questions of the reliability of psychological capacities that can be empirically assessed, say perceptual, attentional and conceptual reliability. Introspection can be reliable. Successful clinical practice relies on accurate introspection as when dealing with pain or correcting blurry vision in optometry. The success of medical interventions suggests that patient reports of these phenomenal states are reliable. Further, in many of the examples to be discussed, the perceptual attention-based account provides a plausible cognitive model of introspection.
Subjects report on what they perceptually experience by attending to the object of their experience, and where perception and attention are reliable, a plausible hypothesis is that their introspective judgments will be reliable as well. Accordingly, I assume the reliability of introspection in the empirical studies to be discussed. Still, given that no scientist should assert the reliability of a method without calibration, introspection must be subject to the same standards. There is more work to be done. Introspection illustrates a type of cognitive access, for a state that is introspected is access conscious.
This raises a question that has epistemic implications: is access consciousness necessary for phenomenal consciousness? If it is not, then there can be phenomenal states that are not access conscious, and so are in principle not reportable. That is, phenomenal consciousness can overflow access consciousness (Block). Access is tied to attention.
For example, the Global Workspace theory of consciousness understands consciousness in terms of access (section 3). So, the necessity of attention for phenomenal consciousness is entailed by the necessity of access for phenomenal consciousness. Many scientists of consciousness take there to be evidence for no phenomenal consciousness without access, and little if any evidence of phenomenal consciousness outside of access.
An important set of studies focuses on the thesis that attention is a necessary gate for phenomenal consciousness, where attention is tied to access. Call this the gatekeeping thesis. To assess that evidence, we must ask: what is attention? An uncontroversial conception of attention is that it is the subject's selection of a target to inform task performance (Wu). The experimental studies thought to support the necessity of attention for consciousness draw on this conception. This approach tests necessity by ensuring, through task performance, that the subject is not attending to S. One then measures whether the subject is aware of S by observing whether the subject reports it.
If the subject does not report S, then the hypothesis is that failure of attention to S explains the failure of conscious awareness of S and hence the failure of report. In a well-known example, subjects perform a demanding visual task, and during the task a person in a gorilla costume walks across the scene. Half of the subjects fail to notice and report the gorilla, this being construed as evidence for the absence of visual awareness of the gorilla. Hence, failure to attend to the gorilla is said to render subjects phenomenally blind to it. The gatekeeping thesis holds that attention is necessary for consciousness, so that removing it from a target eliminates consciousness of it.
Yet there is a flaw in the methodology. To report a stimulus, one must attend to it, i.e., select it to inform the report. The experimental logic requires eliminating attention to a stimulus S to test whether attention is a necessary condition for consciousness. Yet even if the subject were conscious of S when attention to S is eliminated, one can predict that the subject will fail to act on (report) S, since attention is necessary for report. The observed results are actually consistent with the subject being conscious of S without attending to it, and thus are neutral between overflow and gatekeeping.
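The neutrality of the data can be made vivid with a toy simulation (a sketch of my own; the two model functions and their structure are schematic assumptions, not anyone's published model). Because report itself requires attention, a gatekeeping model and an overflow model predict exactly the same report behavior once attention to S is removed:

```python
# Schematic encodings of the two hypotheses. In both, report requires
# attention; they differ only on whether consciousness also requires it.

def gatekeeping_model(attended):
    conscious = attended                   # attention gates consciousness
    reported = conscious and attended      # report requires attention
    return {"conscious": conscious, "reported": reported}

def overflow_model(attended):
    conscious = True                       # consciousness overflows attention
    reported = conscious and attended      # report still requires attention
    return {"conscious": conscious, "reported": reported}

for attended in (True, False):
    g, o = gatekeeping_model(attended), overflow_model(attended)
    # The observable data (reports) never discriminate the models...
    assert g["reported"] == o["reported"]

# ...even though they disagree about consciousness on unattended trials.
assert gatekeeping_model(False)["conscious"] != overflow_model(False)["conscious"]
```

On unattended trials both models predict silence, so a null report is evidence about attention, not about consciousness.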
Instead, the experiments concern parameters for the capture of attention and not consciousness. Those antagonistic to overflow have argued that it is not empirically testable. After all, to test the necessity of attention for consciousness, we must eliminate attention to a target while gathering evidence for the absence of consciousness.
How then can we gather the required evidence to assess competing theories? One answer appeals to no-report paradigms. In one such study, experimenters presented subjects either with stimuli moving in opposite directions or with stimuli of different luminance values, one stimulus in each pair presented separately to each eye. This induces binocular rivalry, an alternation in which of the two stimuli is visually experienced (see section 5).
Where the stimuli involved motion, subjects demonstrated optokinetic nystagmus, where the eye slowly moves in the direction of the stimulus and then makes a fast, corrective saccade (a ballistic eye movement) in the opposite direction. Similarly, for stimuli of different luminance, the pupils would dilate or constrict, being wider for dimmer stimuli and narrower for brighter stimuli, again correlating with subjective reports of the intensity of the stimulus.
These reflexive responses seem to provide a way to track phenomenal consciousness even when access is eliminated. Once it is validated, monitoring such a reflex can substitute for subjective reports within that paradigm. One cannot, however, simply extend the use of no-report paradigms outside the behavioral contexts within which the method is validated.
With each new experimental context, we must revalidate the measure with introspective report. Can we use no-report paradigms to address whether access is necessary for phenomenal consciousness? A likely experiment would be one that validates no-report correlates for some conscious phenomenon P in a concrete experimental context C. With this validation in hand, one then eliminates accessibility and attention with respect to P in C.
If the no-report correlate remains, would this clearly support overflow? Perhaps, though gatekeeping theorists will likely respond that the result does not rule out the possibility that phenomenal consciousness disappears with access consciousness despite the no-report correlate remaining. For example, the reflexive response and phenomenal consciousness might have a common cause that remains even if phenomenal consciousness is selectively eliminated by removing access.

A different method appeals to metacognition. A standard approach is to have subjects perform a task, say perceptual discrimination of a stimulus, and then indicate how confident they are that their perceptual judgment was accurate.
How is metacognitive assessment of performance tied to consciousness? The metacognitive judgment reflects introspective assessment of the quality of perceptual states and can provide information about the presence of consciousness. If subjects accurately respond to the stimulus but show no difference in metacognitive confidence regarding the quality of perception of the target versus the blank, this would provide evidence of the absence of consciousness in vision (effectively, blindsight in normal subjects; section 4). Interestingly, Peters and Lau found no evidence for unconscious vision in their specific paradigm.
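The logic of such a metacognitive test can be sketched in a small simulation (all distributions, parameter values, and the two observer types are illustrative assumptions of mine, not the actual Peters and Lau design). A normal observer's confidence separates correct from incorrect discriminations; a hypothetical "blindsight-like" observer discriminates above chance while confidence carries no such signal:

```python
import random

random.seed(1)

def run_observer(confidence_tracks_evidence, n=20000):
    """Two-alternative discrimination with trial-by-trial confidence."""
    conf_correct, conf_error = [], []
    n_correct = 0
    for _ in range(n):
        target = random.gauss(1.0, 1.0)   # evidence from the target interval
        blank = random.gauss(0.0, 1.0)    # evidence from the blank interval
        correct = target > blank          # forced-choice response
        n_correct += correct
        # Conscious observer: confidence reflects the evidence margin.
        # Blindsight-like observer: confidence is unrelated to the evidence.
        conf = abs(target - blank) if confidence_tracks_evidence else random.random()
        (conf_correct if correct else conf_error).append(conf)
    accuracy = n_correct / n
    conf_gap = (sum(conf_correct) / len(conf_correct)
                - sum(conf_error) / len(conf_error))
    return accuracy, conf_gap

acc_normal, gap_normal = run_observer(True)
acc_blind, gap_blind = run_observer(False)

# Both observers discriminate well above chance...
assert acc_normal > 0.7 and acc_blind > 0.7
# ...but only the conscious observer's confidence tracks accuracy.
assert gap_normal > 0.3 and abs(gap_blind) < 0.05
```

Above-chance accuracy with a flat confidence profile is the signature the text describes: performance without the introspective marker of conscious vision.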
One concern with metacognitive approaches is that they also rely on introspection (Rosenthal; see also Sandberg et al.). If metacognition relies on introspection, does it not accrue all the disadvantages of the latter? One advantage of metacognition is that it allows for psychophysical analysis. There has also been work done on metacognition and its neural basis.
Alternatively, information about confidence might be read out by other structures, say prefrontal cortex (see section 3). Metacognitive and introspective judgments result from intentional action, so why not look at intentional action, broadly construed, for evidence of consciousness? Often, when subjects perform perception-guided actions, we infer that they are relevantly conscious. It would be odd if a person cooked dinner and then denied having seen any of the ingredients. That they did something intentionally provides evidence that they were consciously aware of what they acted on.
An emphasis on intentional action embraces a broader evidential basis for consciousness. Consider the Intentional Action Inference to phenomenal consciousness: if some subject acts intentionally, where her action is guided by a perceptual state, then the perceptual state is phenomenally conscious. An epistemic version takes the action to provide good evidence that the state is conscious.
Notice that introspection is typically an intentional action, so it is covered by the inference. In this way, the Inference effectively levels the evidential playing field: introspective reports are simply one form among many types of intentional actions that provide evidence for consciousness. Those reports are not privileged. The intentional action inference and no-report paradigms highlight the fact that the science of consciousness has largely restricted its behavioral data to one type of intentional action, introspection. What is the basis of privileging one intentional action over others?
Consider the calibration issue. For many types of intentional action deployed in experiments, scientists can calibrate performance by objective measures such as accuracy. This has not been done for introspection of consciousness, so scientists have privileged an uncalibrated measure over a calibrated one. This seems empirically ill-advised.
On the flip side, one worry about the intentional action inference is that it ignores guidance by unconscious perceptual states (see sections 4 and 5). The Intentional Action Inference is operative when subjective reports are not available. A patient in the vegetative state appears at times to be wakeful, with cycles of eye closure and eye opening resembling those of sleep and waking.
As a rule, the patient can breathe spontaneously and has a stable circulation. The state may be a transient stage in the recovery from coma or it may persist until death (Working Party, RCP). Unlike vegetative state patients, minimally conscious state patients seemingly perform intentional actions.
Recent work suggests that some patients diagnosed as in the vegetative state are conscious. Owen et al. instructed a patient to perform two mental imagery tasks, one motor and one spatial. The commands were presented at the beginning of a thirty-second period, alternating between imagination and relax commands. The patient demonstrated activity similar to that of matched control subjects performing the same tasks: sustained activation of the supplementary motor area (SMA) was observed during the motor imagery task, while sustained activation of the parahippocampal gyrus, including the parahippocampal place area (PPA), was observed during the spatial imagery task.
Note that these tasks probe specific contents of consciousness by monitoring neural correlates of conscious imagery.

Deciding whether there is phenomenality in a mental representation implies putting a boundary—drawing a line—between different types of representations…We have to start from the intuition that consciousness in the phenomenal sense exists, and is a mental function in its own right.
That intuition immediately implies that there is also unconscious information processing (Lamme). It is uncontroversial that there is unconscious information processing, say processing occurring in a computer. What Lamme means is that there are conscious and unconscious mental states (representations). For example, there might be visual states of seeing X that are conscious or not (section 4).

To provide a gloss on the hypotheses: for the Global Neuronal Workspace, entry into the neural workspace is necessary and sufficient for a state or content to be conscious.
For Recurrent Processing Theory, a type of recurrent processing in sensory areas is necessary and sufficient for perceptual consciousness, so entry into the Workspace is not necessary. For Higher-Order Theories, the presence of a higher-order state tied to prefrontal areas is necessary and sufficient for phenomenal experience, so neither recurrent processing in sensory areas nor entry into the workspace is necessary. For Information Integration Theories, a type of integration of information is necessary and sufficient for a state to be conscious.
One explanation of generic consciousness invokes the global neuronal workspace. Notice that the previous characterization does not commit to whether it is phenomenal or access consciousness that is being defined. The accessibility of information is then defined as its potential access by other systems (Dehaene; Dehaene et al.). Hence, only states in (3) are conscious.

Figure legend: The top figure provides a neural architecture for the workspace, indicating the systems that can be involved.
The lower figure sets the architecture within the six layers of the cortex spanning frontal and sensory areas, with emphasis on neurons in layers 2 and 3. Figure reproduced from Dehaene, Kerszberg, and Changeux; copyright National Academy of Sciences.

The global neuronal workspace theory ties access to brain architecture. It postulates a cortical structure that involves workspace neurons with long-range connections linking systems: perceptual, mnemonic, attentional, evaluational, and motoric.
What is the global workspace in neural terms? Long-range workspace neurons within different systems can constitute the workspace, but they should not necessarily be identified with the workspace. A subset of workspace neurons becomes the workspace when they exemplify certain neural properties. The workspace then is not a rigid neural structure but a rapidly changing neural network, typically only a proper subset of all workspace neurons. Consider then a neural population that carries content p and is constituted by workspace neurons. In virtue of being workspace neurons, the content p is accessible to other systems, but it does not yet follow that the neurons then constitute the global workspace.
A further requirement is that workspace neurons are (1) put into an active state that must be sustained, so that (2) the activation generates recurrent activity between workspace systems. Only when these systems are recurrently activated are they, along with the units that access the information they carry, constituents of the workspace. This activity accounts for the idea of global broadcast, in that workspace contents are accessible to further systems. The global neuronal workspace theory provides an account of access consciousness, but what of phenomenal consciousness?
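The two requirements just described can be caricatured in a few lines of code (a toy sketch under my own simplifying assumptions; the module names, the activation threshold, and the boolean "sustained" flag are hypothetical stand-ins for the neural details, not part of the theory):

```python
class Module:
    """A consumer system (e.g., memory, evaluation, motor) linked to the
    rest of the network by long-range workspace neurons."""
    def __init__(self, name):
        self.name = name
        self.received = []

def process(content, activation, sustained, modules):
    # Carriage by workspace neurons makes content *accessible*, but the
    # content only enters the workspace when (1) activation is sustained
    # and (2) it drives recurrent activity that broadcasts to all modules.
    if activation > 0.5 and sustained:
        for m in modules:
            m.received.append(content)     # global broadcast
        return "in workspace"
    return "accessible but not broadcast"

modules = [Module(n) for n in ("memory", "evaluation", "motor")]
assert process("red apple", 0.9, True, modules) == "in workspace"
assert process("faint flicker", 0.9, False, modules) == "accessible but not broadcast"
assert all(m.received == ["red apple"] for m in modules)
```

The point of the sketch is only the conditional structure: being carried by workspace neurons is necessary but not sufficient, and it is sustained, recurrent activation that turns a subset of those neurons into the workspace at a given time.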
There is, however, a potential confound.
We track phenomenal consciousness by access in introspective report, so widespread activity during reports of conscious experience correlates with both access and phenomenal consciousness. Correlation cannot tell us whether the observed activity is the basis of phenomenal consciousness or of access consciousness in report (Block). This remains a live question. As discussed in section 2, to eliminate the confound, experimenters ensure that performance does not differ between conditions where consciousness is present and where it is not. Still, the absence of observed activity by an imaging technique does not imply the absence of actual activity, for the activity might be beyond the limits of detection of that technique.
A different explanation ties perceptual consciousness to processing independent of the workspace, focusing on recurrent activity in sensory areas. This approach emphasizes properties of first-order neural representation as explaining consciousness. Victor Lamme argues that recurrent processing is necessary and sufficient for consciousness. Recurrent processing occurs where sensory systems are highly interconnected, involving both feedforward and feedback connections. For example, forward connections from primary visual area V1, the first cortical visual area, carry information to higher-level processing areas, and the initial registration of visual information involves a forward sweep of processing.
Lamme holds that recurrent processing in Stage 3 is necessary and sufficient for consciousness. Thus, what it is for a visual state to be conscious is for a certain recurrent processing state to hold of the relevant visual circuitry. This identifies the crucial difference between the global neuronal workspace and recurrent processing theory: the former holds that recurrent processing at Stage 4 is necessary for consciousness while the latter holds that recurrent processing at Stage 3 is sufficient. Thus, recurrent processing theory affirms phenomenal consciousness without access by the global neuronal workspace.
In that sense, it is an overflow theory (see section 2). Why think that Stage 3 processing is sufficient for consciousness? Given that Stage 3 processing is not accessible to introspective report, we lack introspective evidence for sufficiency. Lamme appeals to experiments with brief presentation of stimuli such as letters, where subjects are said to report seeing more than they can identify in report (Lamme). It is not clear that this is strong motivation for recurrent processing theory, since the very fact that subjects can report seeing more letters shows that they have some access to them, just not access to letter identity.
Lamme also presents what he calls neuroscience arguments. This strategy compares two neural networks, one taken to be sufficient for consciousness, say the processing at Stage 4 as per Global Workspace theories, and one where sufficiency is in dispute, say recurrent activity in Stage 3. Lamme argues that certain features found in Stage 4 are also found in Stage 3 and given this similarity, it is reasonable to hold that Stage 3 processing suffices for consciousness. For example, both stages exhibit recurrent processing.
Global neuronal workspace theorists can allow that recurrent processing in Stage 3 is correlated with consciousness, even necessary for it, but deny that this activity is explanatory in the relevant sense of identifying sufficient conditions for consciousness. It is worth reemphasizing the empirical challenge in testing whether access is necessary for phenomenal consciousness (section 2).
The two theories return different answers, one requiring access, the other denying it. As we saw, the methodological challenge in testing for the presence of phenomenal consciousness independently of access remains a hurdle for both theories. A long-standing approach to conscious states holds that one is in a conscious state if and only if one relevantly represents oneself as being in such a state. For example, one is in a conscious visual state of seeing a moving object if and only if one suitably represents oneself being in that visual state.
The intuitive rationale for such theories is that if one were in a visual state but in no way aware of that state, then the visual state would not be conscious. Thus, to be in a conscious state, one must be aware of it, i.e., one must suitably represent oneself as being in it. Higher-order theories merge with empirical work by tying higher-order representations to activity in prefrontal cortex, which is taken to be the neural substrate of the required higher-order representations. On certain higher-order theories, one can be in a conscious visual state even if there is no visual system activity, so long as one represents oneself as being in that state.
For example, on the higher-order theory, lesions to prefrontal cortex should affect consciousness (Kozuch), testing the necessity of prefrontal cortex for consciousness. Against higher-order theories, some reports claim that patients with prefrontal cortex surgically removed maintain preserved perceptual consciousness (Boly et al.). This would lend support to recurrent processing theories, which hold that prefrontal cortical activity is not necessary for consciousness. By contrast, bilateral suppression of prefrontal activity using transcranial magnetic stimulation seems to selectively impair visibility as evidenced by metacognitive report (Rounis et al.).
IIT defines integrated information in terms of the effective information carried by the parts of the system in light of its causal profile. For example, we can focus on a part of the whole circuit, say two connected nodes, and compute the effective information that can be carried by this microcircuit. The system carries integrated information if the effective informational content of the whole is greater than the sum of the informational content of the parts.
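As a rough illustration (a toy proxy only: total correlation over observed joint states, not IIT's actual effective-information or Φ calculus), one can check whether the whole state of a two-node circuit carries more information than its parts taken separately:

```python
import math
from collections import Counter

def entropy(samples):
    """Shannon entropy (in bits) of an empirical distribution of samples."""
    counts = Counter(samples)
    n = len(samples)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Joint states of a two-node circuit in which node B copies node A,
# so the nodes are perfectly coupled.
joint = [(0, 0), (1, 1), (0, 0), (1, 1)]
a = [s[0] for s in joint]
b = [s[1] for s in joint]

# Total correlation: sum of part entropies minus whole entropy.
# Positive when the whole carries more than the parts do alone.
integration = entropy(a) + entropy(b) - entropy(joint)
print(integration)  # 1.0 bit
```

Each node alone carries 1 bit, but the joint state also carries only 1 bit, so 1 bit is shared: the parts are informationally integrated rather than independent. IIT's own measure is defined over a system's causal profile, not over observed samples, but the arithmetic of "whole exceeds sum of parts" is the same in spirit.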
Intuitively, the interaction of the parts adds more to the system than the parts do alone. On IIT, what matters is the presence of appropriate connections and not the number of neurons. A potential problem for IIT is that it treats many things as conscious which are prima facie not (in Other Internet Resources, see Aaronson a for striking counterexamples and Aaronson b with a response from Tononi). For certain higher-order thought theories, having a higher-order state, supported by prefrontal cortex, without corresponding sensory states can suffice for conscious states.
In this case, the front of the brain would be sufficient for consciousness. Finally, the global neuronal workspace theory, drawing on workspace neurons that are present across brain areas to form the workspace, might be taken to straddle the difference, depending on the type of conscious state involved. Conscious states require entry into the global workspace, such that neither sensory activity nor a higher-order thought on its own is sufficient; i.e., both can be implicated. What is clear is that once theories make concrete predictions about the brain areas involved in generic consciousness, neuroscience can test them.
Work on unconscious vision provides an informative example. In the past decades, scientists have argued for unconscious seeing and investigated its brain basis, especially in neuropsychology, the study of subjects with brain damage. Interestingly, if there is unconscious seeing, then the intentional action inference must be restricted in scope, since some intentional behaviors might be guided by unconscious perception (section 2).
That is, the existence of unconscious perception blocks a direct inference from perceptually guided intentional behavior to perceptual consciousness. The case study of unconscious vision promises to illuminate more specific studies of generic consciousness, along with having repercussions for how we attribute conscious states. Since the groundbreaking work of Leslie Ungerleider and Mortimer Mishkin, scientists divide primate cortical vision into two streams: dorsal and ventral (for further dissection, see Kravitz et al.).
The dorsal stream projects into the parietal lobe while the ventral stream projects into the temporal lobe (see Figure 1). Controversy surrounds the functions of the streams. Ungerleider and Mishkin originally argued that the streams were functionally divided in terms of "what" and "where": the ventral stream for categorical perception and the dorsal stream for spatial perception. There continues to be debate surrounding the Milner and Goodale account (Schenk and McIntosh), but it has strongly influenced philosophers of mind.
Lesions to the dorsal stream do not seem to affect conscious vision, in that subjects are able to provide accurate reports of what they see (but see Wu a). Rather, dorsal lesions can affect the visual guidance of action, with optic ataxia being a common result: optic ataxic subjects perform inaccurate motor actions. Lesions in the ventral stream disrupt normal conscious vision, yielding visual agnosia, an inability to see visual form or to visually categorize objects (Farah). Dorsal stream processing is said to be unconscious.
If the dorsal stream is critical in the visual guidance of many motor actions such as reaching and grasping, then those actions would be guided by unconscious visual states. The visual agnosic patient DF provides critical support for this claim. Like other visual agnosics with similar lesions, DF is at chance in reporting aspects of form, say the orientation of a line or the shape of objects.
Nevertheless, she retains color and texture vision. Strikingly, DF can generate accurate visually guided action, say the manipulation of objects along specific parameters: putting an object through a slot, or reaching for and grasping round stones in a way sensitive to their center of mass. At the same time, DF denies seeing the relevant features and, if asked to verbally report them, is at chance. What is uncontroversial is that there is a division in the explanatory neural correlates of visually guided behavior, with the dorsal stream weighted towards the visual guidance of motor movements and the ventral stream weighted towards the visual guidance of conceptual behavior such as report and reasoning (see section 5).
A substantial further inference is that consciousness is segregated away from the dorsal stream to the ventral stream. How strong is this inference? Recall the intentional action inference. In performing the slot task, DF is doing something intentionally and in a visually guided way. For control subjects performing the task, we conclude that this visually guided behavior is guided by conscious vision.
Indeed, a folk-psychological assumption might be that consciousness informs mundane action (Clark; for a different perspective, see Wallhagen). Since DF shows similar performance on the same task, why not conclude that she is also visually conscious? Yet DF denies seeing features she is visually sensitive to in action. Should introspection then trump intentional action in attributing consciousness?
Two issues are worth considering. The first is that introspective reports involve a specific type of intentional action guided by the experience at issue. One type of intentional behavior is thus being prioritized over another in adjudicating whether a subject is conscious. What is the empirical justification for this prioritization? The second issue is that DF is possibly unique among visual agnosics. It is a substantial inference to move from DF to a general claim about the dorsal stream being unconscious in neurotypical individuals (see Mole for arguments that consciousness does not divide between the streams, and Wu for an argument for unconscious visually guided action in normal subjects).
What this shows is that the methodological decisions we make regarding how we track consciousness are substantial in theorizing about the neural bases of conscious and unconscious vision. A second neuropsychological phenomenon highlighting putative unconscious vision is blindsight, which results from lesions in primary visual cortex V1, typically leading to blindness over the part of visual space contralateral to the site of the lesion (Weiskrantz). For example, left hemisphere V1 deals with right visual space, so lesions in left V1 lead to deficits in seeing the right side of space.
Subjects then report that they cannot see a visual stimulus in the affected visual space. For example, a blindsight patient with bilateral damage to V1, i.e., damage in both hemispheres, will report blindness across the whole visual field. Blindsight patients see in the sense of visually discriminating the stimulus to act on it, yet deny that they see it. Like DF, blindsighters show a dissociation between certain actions and report, but unlike DF, they do not spontaneously respond to relevant features and must be encouraged to generate behaviors towards them.
The neuroanatomical basis of blindsight capacities remains unclear. Certainly, the loss of V1 deprives later cortical visual areas of a normal source of visual information. Still, there are other ways that information from the eye bypasses V1 to provide inputs to later visual areas. Alternative pathways include the superior colliculus (SC), the lateral geniculate nucleus (LGN) in the thalamus, and the pulvinar as likely sources. The latter two have direct extrastriate projections (projections to visual areas in the occipital lobe outside of V1), while the superior colliculus synapses onto neurons in the LGN and pulvinar, which then connect to extrastriate areas (Figure 3).
[Figure legend: The front of the head is to the left, the back of the head to the right. The blue-linked regions lie above the orange-linked regions, cortex above subcortex. V4 is assigned to the base of the ventral stream; V5, called area MT in nonhuman primates, is assigned to the base of the dorsal stream.]
Which of these provides the basis for blindsight remains an open question, though all pathways might play some role (Cowey; Leopold). If blindsight involves nonphenomenal, unconscious vision, then these pathways would be a substrate for it, and a functioning V1 might be necessary for normal conscious vision. Campion et al. suggested an alternative: blindsight might be degraded conscious vision coupled with a conservative response criterion. In their reports, blindsight subjects feel like they are guessing about stimuli they can objectively discriminate.
Consider trying to detect something moving in the brush at twilight versus at noon. At noon, the signal will be greatly separated from noise (the object will be easier to detect), while at twilight it will not be (the object will be harder to detect). Yet in either case, one might operate with a conservative response criterion, say because one is afraid to be wrong. Further, blindsight patients are more conservative in their responses, so they will be apt to report the absence of a signal by saying that they do not see the relevant stimulus even though the signal is there and they can detect it, as verified by their above-chance visually guided behavior.
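The distinction between detection capacity and willingness to say "seen" is exactly what signal detection theory quantifies, separating sensitivity (d′) from response criterion (c). A minimal sketch with the standard formulas, using illustrative hit and false-alarm rates rather than data from any blindsight study:

```python
from statistics import NormalDist

z = NormalDist().inv_cdf  # probit (inverse standard normal CDF)

def sdt_indices(hit_rate, false_alarm_rate):
    """Standard signal detection indices.

    d' = z(H) - z(F): sensitivity, how well signal separates from noise.
    c  = -(z(H) + z(F)) / 2: criterion, how reluctant one is to say "seen".
    """
    d_prime = z(hit_rate) - z(false_alarm_rate)
    criterion = -0.5 * (z(hit_rate) + z(false_alarm_rate))
    return d_prime, criterion

# Neutral observer: says "seen" readily.
d1, c1 = sdt_indices(hit_rate=0.84, false_alarm_rate=0.16)

# Conservative observer: same underlying sensitivity, stricter criterion,
# so fewer hits AND fewer false alarms. d2 is about equal to d1, but c2 > c1.
d2, c2 = sdt_indices(hit_rate=0.50, false_alarm_rate=0.02)
```

On these illustrative numbers, the two observers have nearly identical d′ (around 2) but very different criteria, mirroring the claim that a blindsight patient could detect a stimulus while denying that they see it.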
This possibility was explicitly tested by Azzopardi and Cowey with the well-studied blindsight patient GY. They compared blindsight performance with normal subjects at threshold vision using signal detection measures and found that, with respect to motion stimuli, the difference between discrimination and detection used to argue for blindsight can be explained by changes in response criterion, as Campion et al. suggested.
That is, GY's claim that he does not see the stimulus is due to a conservative criterion and not to a detection incapacity. In introspecting, the concepts available to subjects will determine their sensitivity in report. In many studies with blindsight, subjects are given a binary option: do you see the stimulus or do you not see it? The concern is that the "do not see" option would cover cases of degraded consciousness that subjects might be unwilling to classify as seeing, due to a conservative response criterion.
So, what if subjects are given more options for report? When normal subjects used a graded visibility scale, performance increased as rated visibility increased. When the scale was used with a blindsight patient (Overgaard et al.), performance likewise tracked graded reports of visibility. A live alternative hypothesis, then, is that blindsight presents not a case of unconscious vision but of degraded conscious vision, with a conservative response bias that affects introspection. At the very least, the issue depends on how introspection is deployed, a topic that deserves further attention (see Phillips for further discussion of blindsight).
Blindsight and DF show that damage to specific regions of the brain disrupts normal visual processing, yet subjects can access visual information in preserved visual circuits to inform behavior despite failing to report on the relevant visual contents. The received view is that these subjects demonstrate unconscious vision. One implication is that the normal processing in the ventral stream, tied to normal V1 activity, plays a necessary role in normal conscious vision.
Another is that dorsal stream processing, or visual processing that bypasses V1 via subcortical routes, yields only unconscious visual states. This points to a set of networks that begin to provide an answer to what makes visual states conscious or not. An important further step will be to integrate these results with the general theories noted earlier (section 3). Still, the complexities of the empirical data bring us back to methodological issues about tracking consciousness and the following question: what behavioral data should form the basis of attributions of phenomenal consciousness?
The intentional action inference is used in a variety of cases to attribute conscious states, yet the results of the previous sections counsel us to be wary of applying that inference widely. After all, some intentional behavior might be unconsciously guided. In the case of DF, we noted that unlike many other visual agnosics, she can direct motor actions towards stimuli that she cannot explicitly report and which she denies seeing. In her case, we prioritize introspective reports over intentional action as evidence for unconscious vision.
Yet, one might take a broader view that vision for action is always conscious and that what DF vividly illustrates is that some visual contents (dorsal stream) are tied directly to the performance of intentional motor behavior and are not directly available to the conceptual capacities deployed in report. In contrast, other aspects of conscious vision, supported by the ventral stream, are directly available to guide reports.
This functional divergence is explained by the anatomical division in cortical visual processing.
For some time now, these striking cases have been taken as clear cases of unconscious vision, and if this hypothesis is correct, the work has begun to identify visual areas critical for seeing, sometimes conscious and sometimes not. The neuroanatomy demonstrates that visually guided behavior has a complex neural basis involving cortical and subcortical structures with a substantial level of specialization.