Okay.
So my name is Steve Fleming, and I'm going to
be giving you a lecture on consciousness today.
The focus of this lecture is the problem of perceptual awareness.
So imagine you're standing on a bridge in London, whichever bridge this one might be, and you're looking at the sunset.
Then you will most likely be aware of that sunset and be able to communicate its properties to other people, to your friends and so on.
But at the same time, there's a lot of other
perceptual inputs that you may well be unaware of, such
as the feeling of the clothes on your skin or
changes in your posture.
And a cool question in consciousness science is what are the computations in the brain that differentiate between conscious and unconscious information, and what are the neural mechanisms that support that difference.
Now, when you hear the word consciousness, people often start
thinking about mysterious phenomena.
So in the media you might hear about panpsychism or plant consciousness and so on. We're not going to be encroaching on that territory today. There are some somewhat out-there theories of consciousness doing the rounds, and we can't rule them out 100%.
But the approach I'm going to tell you about today
is taking an approach very much squarely within cognitive psychology
and neuroscience.
And the questions that we can tackle with experiments in
the lab are these ones.
What differentiates conscious from unconscious processing at a neural level?
What is it in the brain that makes a difference
for conscious processing?
What's the difference that makes the difference?
So in today's lecture, there's quite a lot to get
through.
There's a bit of material towards the end of the
lecture, which is some unpublished work from my lab that
is optional.
It won't be examined, for instance, since it's not published, but if we get there then I can talk about it; I might skip over it if we are short of time.
So we're going to cover these topics: the difference between conscious level and conscious content; some methods for manipulating consciousness in the lab; what's been found out about the neural correlates of consciousness; and then some theoretical issues, such as the importance of controlling for performance. And at the end we'll look at some ethical issues that arise now that it's become possible to detect the presence or absence of consciousness in non-responsive individuals.
So what do we mean by the difference between content and level of consciousness?
The idea is that the level of consciousness varies from sleep to wakefulness: you become unconscious when you're in a dreamless sleep, you then maybe become conscious of your dreams, and when you wake up in the morning you're fully conscious of the outside world. That's a difference in the level of consciousness.
But the idea is that even when conscious level is
constant, so even when you are awake and engaged in
your surroundings, then the content of your consciousness might fluctuate
over time.
So you might be conscious of my voice right now, but in a few minutes, if you zone out for a few seconds and think about something else, you might not be conscious of my voice in that moment.
So the main question we're going to focus on today is what contributes to conscious experience over and above simple information processing. The information is getting processed to some level; we know that from experiments I'll talk about next. But sometimes it's conscious and sometimes it's not. What underpins that difference?
So a lot of the work that's been done to study consciousness in the lab has been a variant on a paradigm called visual masking, which some of you may have heard about. Masking is quite simple: the idea is that you present a stimulus on a screen and then, a very short time later, you present a mask. The time interval between the stimulus and the mask is known as the inter-stimulus interval, or ISI. Sometimes in papers you'll also see it written as the stimulus onset asynchrony, or SOA; those two terms are often used interchangeably.
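Just to make the terminology concrete, here is a minimal sketch in Python of a single masked-word trial timeline. The durations and the example word are made up for illustration; this is not the code used in any of the experiments I'll describe.

```python
# Minimal sketch (hypothetical values): a masked-word trial timeline,
# assuming a 33 ms word presentation followed by a variable ISI and a mask.

def trial_timeline(word, stim_ms=33, isi_ms=50, mask="#####"):
    """Return the sequence of screen events for one masking trial.

    SOA (stimulus onset asynchrony) = stimulus duration + ISI, so a
    shorter ISI means the mask arrives sooner after the word's onset.
    """
    soa_ms = stim_ms + isi_ms
    events = [
        ("word", word, stim_ms),   # brief target word
        ("blank", "", isi_ms),     # inter-stimulus interval
        ("mask", mask, 200),       # backward mask
    ]
    return events, soa_ms

events, soa = trial_timeline("ORANGE", stim_ms=33, isi_ms=17)
print(f"SOA = {soa} ms")
for label, content, dur in events:
    print(f"{label:>5}: '{content}' for {dur} ms")
```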
Sorry, I should have told you to be ready for this. If you have a look at the screen now, I'm going to flash up an example: a mask, some hash marks, followed by the stimulus, followed by the mask again.
And you should be able to see this one.
Let's just back up.
So we're ready.
Here we go.
Did everyone see what the word was?
Okay.
So that was a relatively long ISI, so the word is visible. This is now a shorter ISI. And this is an even shorter ISI. Put your hand up if you saw the word.
Okay.
So about 50%.
PowerPoint is not the best technology for presenting these kinds of stimuli, but as the interval between the stimulus and the mask decreases, it becomes harder and harder to see the word. That last one was 'orange', for those of you who saw it.
So masking effectiveness has been studied in a number of studies. It depends on the timing, as we've seen. It also depends on stimulus intensity: if I keep the ISI fixed and drop the contrast of the stimulus, how black or grey it is, then I can increase the masking efficacy. And it depends on stimulus content: interestingly, things like your own name or emotional words will jump out even at the same level of masking. There are also individual differences, as we saw; some people saw it, some people didn't. This is obviously not a controlled experiment, just a classroom demonstration, but when we do properly controlled versions of this, we see individual differences in masking threshold.
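To make those relationships concrete, here is an illustrative toy model of how the probability of reporting the word as "seen" might depend on SOA and stimulus contrast. The logistic form and all parameter values are my own assumptions for illustration, not fitted to any of the data discussed.

```python
# Illustrative sketch only: a logistic psychometric function where
# visibility rises with SOA, and lowering contrast (stronger masking)
# shifts the effective threshold upward. Parameters are made up.
import numpy as np

def p_seen(soa_ms, contrast, threshold_ms=60.0, slope=0.1):
    """P(report 'seen') as a function of SOA and stimulus contrast."""
    effective_threshold = threshold_ms / max(contrast, 1e-6)
    return 1.0 / (1.0 + np.exp(-slope * (soa_ms - effective_threshold)))

for soa in (17, 33, 50, 100):
    print(f"SOA {soa:3d} ms: p(seen) = {p_seen(soa, contrast=1.0):.2f} "
          f"(full contrast), {p_seen(soa, contrast=0.5):.2f} (half contrast)")
```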
Now, this would be less interesting if it just meant that you didn't see the word because it wasn't even processed by your retina, for instance. That would be less interesting because it would just mean the stimulus hasn't got into the system. But we know from a number of studies in cognitive psychology that masked stimuli can affect behaviour even when people say they didn't see them.
So often this is done using what's known as an indirect test of the processing of the words. You might go through an initial phase of an experiment where you get flashed words or other stimuli that are masked, and on some trials you might say, I didn't see it, I don't know what the word was. But then in a second, indirect test, and this is a classic example from the eighties, if you ask people to say whether a letter string is a word or a non-word, they'll be faster if that word is semantically related to the masked word they claimed not to see. So in this case banana is semantically related to orange, and you're faster to say that banana is a word. That indicates some depth of processing: it's not just that the stimulus failed to get into the system at the retina. It's got into the system, but people still claim they didn't see it.
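The logic of that indirect test can be sketched in a few lines. The reaction times below are simulated, not real data; the analysis simply compares lexical decision speed for related versus unrelated targets on trials where the prime was reportedly unseen.

```python
# Sketch of the indirect-test logic with made-up data: on trials where the
# masked prime was reportedly unseen, compare lexical decision RTs for
# targets that are semantically related vs unrelated to the prime.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
rt_related = rng.normal(540, 60, size=40)     # e.g., masked ORANGE -> "banana"
rt_unrelated = rng.normal(575, 60, size=40)   # e.g., masked ORANGE -> "chair"

t, p = stats.ttest_ind(rt_related, rt_unrelated)
print(f"priming effect = {rt_unrelated.mean() - rt_related.mean():.1f} ms, "
      f"t = {t:.2f}, p = {p:.3f}")
```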
Now, more recently, people have used brain imaging to show that masked stimuli get at least into the visual system, and perhaps even into areas dealing with, say, language.
This is an experiment from Geraint Rees's group in Queen Square. What they did here was use masking to make the orientation of these lines invisible. They used a slightly more sophisticated masking procedure so that, rather than just flashing the mask once, they could flash it continuously, and people would still claim not to see the orientation of the grating. So what's actually being flashed is a left-tilted or a right-tilted grating, but subjects just see the mask. What you see on the right is a machine learning classifier that's trained to try and decode the true orientation of the invisible grating from activation in different brain areas. As you can see, they can decode above chance in early visual cortex. That suggests the grating is being processed in early visual cortex, even though the subjects themselves say they didn't see it.
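For those who haven't met this kind of analysis, here is a generic sketch of the decoding logic with simulated data; it is not the authors' actual pipeline, just a cross-validated classifier trained to predict the unseen orientation from voxel patterns.

```python
# Generic decoding sketch (simulated data): predict the orientation of the
# invisible grating from voxel patterns, with cross-validation.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_trials, n_voxels = 80, 200
orientation = rng.integers(0, 2, n_trials)          # 0 = left tilt, 1 = right tilt
pattern = rng.normal(0, 1, n_voxels)                 # voxel pattern carrying orientation
X = np.outer(orientation - 0.5, pattern) * 0.3 \
    + rng.normal(0, 1, (n_trials, n_voxels))         # weak signal buried in noise

clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, orientation, cv=5)
print(f"decoding accuracy: {scores.mean():.2f} (chance = 0.50)")
```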
So that's some evidence, both behavioural and neural, for the processing of stimuli without awareness. Then what happens when you become aware of a stimulus? This is the other side of the coin. When you do this contrast, you can show that when you see a masked word you get elevated activation in a widespread network including the frontal cortex, whereas when the word is invisible the activation is much more restricted to early visual areas.
I can't find a pointer, but maybe I can come up and point. Yes. Good.
So this red blob on the right-hand side here is the activation you get in early visual cortex, extrastriate cortex, when the word is invisible compared to baseline, suggesting again, like the priming experiments, some processing of masked stimuli in early visual areas.
But when you become aware of it, when you see it, you get much more widespread activation in your frontoparietal network. These are both experiments from Stanislas Dehaene's group, reviewed in this paper.
Again, you get a similar pattern when it's in the auditory domain, with a masked word or, interestingly, a word embedded in white noise. Sometimes you say you heard it, sometimes you didn't. When you say you didn't hear it, you get some activation in auditory cortex, suggesting some degree of processing. But when you say you heard it, you get much more widespread activation in frontoparietal cortex.
We'll come back to what this means in a second.
Just for completeness, another popular technique for manipulating awareness of a stimulus is binocular rivalry. Often this is done using red-green goggles.
So you present an overlapping stimulus.
So the house is in red here, the face is
in green.
If you put red green goggles on, the images compete
for dominance between the two eyes.
Sometimes you see the faces, sometimes you see the house.
But the stimulus that's on the screen is identical in
both cases.
And so this is useful because then you can create
this kind of phenomenon where there's an unchanging stimulus, but
your perception is changing.
And when the stimulus is unconscious, we can then track its influence on information processing.
So just to give you one example of work in this area, here is a study that used binocular rivalry to mask the movement of some moving dots. This is the stimulus that people actually see; in the other eye, what they don't see, is some coherently moving dots. Then, after this initial period of binocular rivalry, they have to make a decision about some consciously visible moving dots. What they find is that if the unconscious information is helpful, people will be better at that decision: better when the suppressed dots are coherent, in red, than when they're just random, in green. Interestingly, this unconscious information boosts your performance, so again it shows the stimulus is processed, but it doesn't change people's confidence in that decision. To them, subjectively, it feels just the same in both the helpful and the unhelpful cases. But in the helpful cases, even though the information is unconscious, it's actually making your decision better.
So we can then ask, and we've already seen some of this already, what are the neural correlates of consciousness? We know that we can input information into the system that is sometimes unconscious, so we want to know what's the difference between when you're conscious of a stimulus and when you're not. The neural correlates of consciousness were defined in the early 2000s as the minimal set of neural events that are sufficient for a specific conscious experience. The idea is to keep the stimulus inputs similar, but contrast conditions between when you say you were aware of something and when you were unaware of it.
Okay, so we've already seen this slide.
So this is the idea.
When you're aware of something, there is a global ignition through the brain, through the frontal and the parietal cortex.
And when you're not aware of that stimulus, you don't
get that global ignition.
So just to give you a detailed example of this, this is actually from a study using EEG combined with MEG. Here they were able to look at the fine-grained temporal dynamics of the fate of a stimulus when you show it on the screen, either when someone says they saw it or when they didn't see it.
Sorry, yes, that's actually prefrontal cortex. There is also activation in the parietal cortex and in the prefrontal cortex. That's right.
And so, again, they used this masking procedure with a variable delay, the stimulus onset asynchrony. What they could then do is plot these activity time courses as a function of the SOA. People gradually became more and more aware of the stimulus as the SOA increased. What they found was that in early visual cortex, see this box here, there was a pretty linear increase with SOA. It's as if the stimulus response is getting a bit stronger, a bit stronger, a bit stronger in the early visual areas. But when you look at the prefrontal cortex signature later in the trial, it's more all or nothing. It's almost as if on some trials the whole system ignites and you're conscious of it, and on other trials it doesn't. This is the idea of a bifurcation response: it's like a nonlinear system where occasionally the stimulus will trigger this ignition and make it into awareness, and other times it won't.
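To get a feel for that distinction, here is a toy simulation, not the published model, contrasting a graded early visual signal with an all-or-nothing "ignition" signal that only fires when the evidence crosses a threshold. All parameter values are arbitrary.

```python
# Toy simulation: an early visual response that grows linearly with SOA,
# and a late "ignition" response that is all or nothing, triggered only
# on trials where accumulated evidence crosses a threshold.
import numpy as np

rng = np.random.default_rng(2)
soas = [16, 33, 50, 66, 100]       # ms, hypothetical values
n_trials = 500

for soa in soas:
    early = 0.01 * soa + rng.normal(0, 0.2, n_trials)        # graded response
    ignited = (early + rng.normal(0, 0.3, n_trials)) > 0.6   # threshold crossing
    late = np.where(ignited, 1.0, 0.05)                      # all-or-none late response
    print(f"SOA {soa:3d} ms: early mean {early.mean():.2f}, "
          f"ignition rate {ignited.mean():.2f}, late mean {late.mean():.2f}")
```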
So various models have been proposed for what might be going on here. One of the most popular, from Bernard Baars and from Stanislas Dehaene's group, has been the global workspace theory of consciousness: the idea that consciousness occurs once information that may be sequestered in perceptual areas gains access to a global neuronal workspace, and that's proposed to be supported by these frontoparietal networks. That would explain why you get ignition: when the information is weak or subliminal, it just reverberates around the visual cortex, but as it crosses the threshold for admission into consciousness, you get recruitment of these frontoparietal regions as well.
An alternative view is the recurrent processing theory that was put forward by Victor Lamme in the 2000s. The idea is that when you get some visual input, if it's very weak, then you'll just get feedforward processing and you won't get any conscious percept. But if it's a bit stronger and longer lasting, then you'll get both feedforward and feedback activation along the visual cortex, and that supports consciousness. What Lamme argues is that this additional ignition into the frontoparietal network is secondary: it's not causal for conscious experience, but it might be involved in things like reporting that you've seen something.
So this is quite a deep divide between what are called local theories of consciousness, which propose that activation in recurrent loops within the perceptual system is sufficient for conscious experience, and global theories, which propose that you need to get the information out into a global workspace for consciousness to arise. On the local view, this frontoparietal activation would be just for reporting your experience, for instance, but not for actually being conscious of that experience.
You might, for instance, be conscious of something and then immediately forget it and be unable to report it. On the recurrent processing view, you would say that you were conscious in the moment, supported by recurrent processing, even though you're unable to report on it.
Now, personally, I find this view quite difficult to get my head around, because it would suggest there could be cases where the subject themselves says, I definitely wasn't conscious of this thing, but the neuroscientist says, well, I can see reverberating loops in your visual cortex, so you must have been conscious of it. I think this actually goes against the folk psychological notion of consciousness as something we're able to communicate to others. But I think it's still worth taking seriously.
Okay.
So the final theory I just want to cover is
higher order theory.
This has got similarities with the global workspace theory.
And the idea is that there are first order representations
in perceptual systems, and these first order representations themselves are
not sufficient for conscious experience.
They might drive behaviour, they might allow you to respond
above chance on a task, but they're not sufficient for
conscious experience.
Instead, the idea is that consciousness of that content requires some higher order representation that allows the system to become aware of that first order state. This entails that the first order representation in the perceptual system should be monitored, or meta-represented, by the higher order representation.
So that's another prominent view, and again there's ongoing debate about which view is correct, with new experiments coming out all the time to try and distinguish between them. One important issue we need to deal with here, which affects all these different theories, is the problem of performance confounds.
When we look at the neural correlates of consciousness, when we look at these ignition signatures, are we really isolating consciousness, or are we isolating the neural correlates of the improved performance or information processing that often accompanies consciousness? Because we know that when we're aware of something, people are often able to use it for various functions like language, working memory, communicating it, and so on.
The reason it's really important to control for performance also has its roots in philosophy and in theories of consciousness. The idea is that consciousness, or sentience, is not the same as being sensitive to something. We can think of plenty of systems that are sensitive to the outside world without being conscious: a camera, maybe a thermometer. Those are all sensitive to the outside world, but we don't usually think of them as being conscious of what they are sensitive to. And under some theories, like higher order theory, these first order representations are held to drive task performance as well as contribute to consciousness.
The problem is that if we have some experiment that boosts consciousness in some way, like changing the SOA in a masking experiment, and we change both consciousness and performance in tandem, then we don't know whether the neural or behavioural change we see is due to the change in performance or the change in consciousness. As I said, this is particularly crucial for testing the predictions of higher order theories, because if performance is not controlled, then we might unfairly stack the deck in favour of first order theories that see correlates of consciousness only in perceptual areas. If you're interested, this has been really nicely unpacked in a very accessible book by Hakwan Lau.
One reason to believe we can control for performance and still study consciousness is the phenomenon of blindsight, discovered by Larry Weiskrantz, who was an eminent psychologist at Oxford. What he did was study patients with damage to early visual cortex. These patients would often have damage just to one hemisphere, caused by an injury, and that led to their clinical visual field tests looking something like this: they'd be perfectly well aware of things on one side of space, but essentially blind on the other side. So when they came into the clinic, when they went to the ophthalmologist, they would be classed as having a brain lesion causing blindness in one half of space.
But what's really interesting is that when you study them in more depth, patients with blindsight are actually able to guess well above chance what is being presented in that blind hemifield. I won't show this just for the sake of time, but if you're interested you can watch a YouTube video of a blindsight patient doing exactly this: lights are flashed in the blind field, and when he's forced to guess where they were, he's often close to 100% accurate, even though he himself will say, I didn't see anything. So that's a case of performance being high but awareness being zero.
In experiments you can then adjust the stimuli in both the normal and the blind hemifield so that performance is matched between the two. But now he's only aware of the stimuli in the normal hemifield, not in the blind hemifield. That creates a really nice, well-controlled case where performance is matched in the two hemifields. There's no performance confound now: he's able to process the information just as well in both cases, and yet awareness is only present for stimulation in one field and not the other. And when we then look at brain activation in relation to this difference, controlling for performance, you still see elevated activation in the frontoparietal network when you're presenting stimuli in the normal compared to the blind hemifield.
It's also possible to do these kinds of experiments in otherwise healthy observers. This was a study of blindsight-like effects in normal subjects, healthy subjects without a brain lesion, and it was done using a masking procedure. Participants were first asked to decide whether a diamond or a square was presented on the screen. This was difficult because it was masked; it was flashed very briefly. They then had to indicate whether they saw the target or whether they simply guessed the answer. What was found in this experiment was that it was possible to find two conditions, across the whole range of SOAs, where performance was matched. That's the red line here: performance at these two SOAs is matched, but at this SOA people say they saw it less often than at this other SOA. So down here performance is nicely matched between conditions, information processing is just as good, but people are less aware of the stimulus at one SOA than at the other. And when that contrast is then done between these two SOAs within the fMRI scanner, you get localised activation in the lateral prefrontal cortex in relation to the increase in conscious awareness.
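To make the logic of performance matching concrete, here is a toy sketch with made-up accuracy and visibility values; the point is simply that accuracy can saturate before visibility does, so you can find SOA pairs with matched performance but different awareness.

```python
# Toy sketch of performance matching with hypothetical values: find pairs
# of SOAs where task accuracy is approximately matched but the proportion
# of "seen" reports differs.
import numpy as np

rng = np.random.default_rng(3)
p_correct = {33: 0.75, 50: 0.90, 66: 0.91, 100: 0.92}   # accuracy saturates early
p_seen    = {33: 0.20, 50: 0.45, 66: 0.65, 100: 0.90}   # visibility keeps rising

n = 400
results = {soa: (rng.binomial(1, p_correct[soa], n).mean(),
                 rng.binomial(1, p_seen[soa], n).mean()) for soa in p_correct}

soas = sorted(results)
for i, a in enumerate(soas):
    for b in soas[i + 1:]:
        if abs(results[a][0] - results[b][0]) < 0.03:    # performance matched
            print(f"matched pair {a} vs {b} ms: "
                  f"accuracy {results[a][0]:.2f}/{results[b][0]:.2f}, "
                  f"seen {results[a][1]:.2f}/{results[b][1]:.2f}")
```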
And just a final example of this. This is again a study of patients with brain lesions, but now with lesions to the prefrontal cortex. Patients with prefrontal damage were asked to make a forced choice about which of two stimuli was presented on the screen and to rate the visibility of those stimuli, so how aware they were. What's interesting is that, compared to controls, the subjective visibility of those stimuli was reduced in the patients, and that's even the case when performance is matched between the controls and the patients. This is now plotting the visibility for trials which the patients and controls got correct, the upper lines here, and for trials they got incorrect, here. So even when you split out the trials according to ones they got right and wrong, you still see that subjective visibility is lower in the patients, in grey, compared to the controls, in black.
What's interesting is that if you then correlate the extent to which visibility was reduced in the patients with the location of their brain lesion, you can get a map like this. This is known as lesion-symptom mapping: a map of the lesion locations that were most correlated with the change in consciousness threshold. Here you get evidence for a contribution of damage to the prefrontal cortex, in particular the anterior prefrontal cortex, to the threshold for conscious awareness.
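For those unfamiliar with lesion-symptom mapping, here is a crude sketch of the idea with simulated data; it is only an illustration of the principle (voxel-by-voxel association between lesion presence and the behavioural measure), not the actual method used in the study.

```python
# Crude sketch of voxel-wise lesion-symptom mapping with simulated data:
# for each voxel, correlate lesion presence/absence across patients with
# the behavioural measure (here, a drop in visibility ratings).
import numpy as np

rng = np.random.default_rng(4)
n_patients, n_voxels = 30, 1000
lesion = rng.binomial(1, 0.2, (n_patients, n_voxels))      # 1 = voxel lesioned
# Hypothetical ground truth: damage to the first 50 voxels lowers visibility
visibility_drop = lesion[:, :50].mean(axis=1) * 2.0 + rng.normal(0, 0.3, n_patients)

r_map = np.zeros(n_voxels)
for v in range(n_voxels):
    if lesion[:, v].std() > 0:                              # skip voxels never lesioned
        r_map[v] = np.corrcoef(lesion[:, v], visibility_drop)[0, 1]

print("top associated voxels:", np.argsort(-r_map)[:5])     # should fall among the first 50
```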
There's one other confound that we need to think carefully about here, and that's not performance but confidence. I mentioned a while ago that you can find cases where people perform better in one condition than another due to unconscious information, but their confidence level is not different. The problem is that in a typical masking experiment, when people say that they saw a stimulus, they're often more confident on those trials than when they say they didn't see it. You can see that here; this is actually some data from our lab. When subjects say they saw a masked stimulus, they're more confident, on the y axis, compared to when they said they didn't see it.
So the problem is that all these existing findings in the literature on frontoparietal activation being related to conscious awareness might be consistent with these brain areas coding for the visibility, or your awareness, of the stimulus, but they could also be consistent with these brain regions being involved in representing confidence in your decisions.
To look at this, this was work done by my past PhD student, Matan Mazor, and postdoc Nadine Dijkstra. What we did was apply a machine learning decoding procedure across the whole brain to try to decode people's awareness of the stimulus, whether they said they saw it or not, and also what the identity of that stimulus was. We can decode the identity of the stimulus, whether it was tilted to the left or the right, in early visual areas, and we can then decode their awareness in frontoparietal cortex.
So that's consistent with the picture of global ignition: when people say they saw it, you get more activation, more decoding of awareness, across the frontoparietal network. But when we then control for confidence in this analysis, when we artificially match the distributions of confidence on yes and no trials, when they said they were aware of it or not, a lot of this effect actually disappeared. After downsampling to ensure confidence was matched on these trials, there was no longer any visibility decoding, any awareness decoding, in large swathes of the prefrontal cortex. Now, it was still possible to decode awareness from some subregions, such as the posterior medial frontal cortex.
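The downsampling step is simple to sketch. The trials below are simulated, and the matching rule (subsample within each confidence bin until "seen" and "unseen" trial counts are equal) is a generic version of the idea, not necessarily the exact procedure used in our paper.

```python
# Sketch of confidence matching with simulated trials: subsample "seen"
# and "unseen" trials within each confidence bin so the two classes end
# up with identical confidence distributions before decoding.
import numpy as np

rng = np.random.default_rng(5)
n = 1000
seen = rng.binomial(1, 0.5, n)
# Confidence (1-4) tends to be higher on "seen" trials -- the confound
confidence = np.clip(rng.poisson(1 + 1.5 * seen), 1, 4)

keep = []
for c in np.unique(confidence):
    idx_seen = np.flatnonzero((confidence == c) & (seen == 1))
    idx_unseen = np.flatnonzero((confidence == c) & (seen == 0))
    k = min(len(idx_seen), len(idx_unseen))          # downsample the larger class
    keep.extend(rng.choice(idx_seen, k, replace=False))
    keep.extend(rng.choice(idx_unseen, k, replace=False))
keep = np.array(keep)

print("mean confidence, seen vs unseen (before):",
      round(confidence[seen == 1].mean(), 2), round(confidence[seen == 0].mean(), 2))
print("mean confidence, seen vs unseen (after): ",
      round(confidence[keep][seen[keep] == 1].mean(), 2),
      round(confidence[keep][seen[keep] == 0].mean(), 2))
```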
This is a very recent study; we just presented it at a conference over the summer and published it a few weeks ago, so I think we're still figuring out how to interpret it.
I think there are two possible implications of this work. The first is that these are two distinct phenomena: on the one hand we have confidence formation, monitoring, metacognition, thinking about whether you got an answer right or wrong, and this has been confounding studies of awareness in the literature, so we need to control for it to isolate a pure awareness signal. This view has been supported by people like Stanislas Dehaene, who think that monitoring, evaluating whether you get things right or wrong, should be considered distinct from global broadcast.
The alternative view, the one that I favour, is that there are shared computational substrates for both monitoring and awareness. Essentially, what we mean by awareness is the ability to be aware of how things are being processed, and that includes being confident in a response that you give. Visibility itself can be defined in terms of being confident in a first order representation. That means there might actually be shared mechanisms underpinning both confidence and awareness, and therefore it's unsurprising that many of the classical correlates of consciousness disappear when we control for confidence, because that's exactly what we should expect under that view.
So this is an ongoing debate and it's not been
resolved.
Yes.
And so what I've described is that we now have
a number of empirical signatures of consciousness.
We also have a number of theories.
And I just want to tell you one thing about what's happening at the moment in consciousness science as an ongoing project. One thing that people are a bit worried about is that these theories are somewhat siloed; they're being tested by different labs that don't often talk to each other. There was actually a really interesting study from Liad Mudrik's group in Israel, where they collected together all these different papers on consciousness in the literature, then mined the text of those papers and asked which theory was being tested in each paper.
Slightly concerningly, papers that said they were testing recurrent processing theory would often report evidence for activation in the visual cortex in support of consciousness, whereas papers that said they were testing global workspace theory would often report activations in the frontoparietal network in support of consciousness. This is a bit of a concern, right? Because they can't both be right. So there seems to be quite a bit of bias, with some labs focused on one theory and some labs focused on another. And there are now really interesting ongoing initiatives, such as adversarial collaborations, that have tried to stop this happening and get labs favouring different theories to actually work together and test competing predictions.
I also think there might be a deeper problem here in consciousness science, and that is that theories of consciousness are not really thinking about what the functions of consciousness are. In this recent article I suggested that in consciousness science we have solutions in need of problems: people are putting forward theories of how consciousness might work in the brain, but they're not necessarily thinking about why consciousness exists in the first place, and that is at odds with other fields of psychology. If you have a theory of memory, for instance, then you want to know what that memory is useful for, how it is helping the organism survive. Instead, in consciousness science, we often rely, I think, a bit too much on intuition about the kind of experience we're trying to explain.
Just as I did at the start: I began the lecture by saying, imagine you're on a bridge in London looking at the sunset. That's very intuitive, but it's not really constrained functionally. This lack of functional constraint is a problem, because the test of a good theory in psychology or neuroscience is whether it can explain how a system performs a particular function: how a theory of vision explains how we categorise objects, or how a theory of memory explains how we remember and forget.
All right.
So I think we can take a lot from the levels of analysis proposed by David Marr. This is the idea that if you're trying to explain, for instance, how a bird flies, you first need to know something about why it's trying to flap its wings. What is the goal of flight? It is to take off and leave the ground. That then constrains your search for the algorithm that does the job: maybe you're going to take off and leave the ground with jet engines and fixed wings, or maybe by flapping your wings. And then you can think about how that might be implemented at the physical level.
People have suggested that in neuroscience and psychology it's useful to think about all these levels of analysis: the level of implementation, such as how it works at the level of brain areas and circuits; the level of the algorithm, the computation; but also, and this is the level that often gets missed out, the goal. What is the algorithm for? Why is it there in the first place?
One thing I think is useful for constraining theories of consciousness is the notion that consciousness, at the very least, seems to be for sharing information. This has been put forward by Chris Frith, who is an emeritus professor at UCL. He writes that conscious experience is the one outcome of the brain's information processing that can be shared with others.
I think it's very hard to disagree with this; it's essentially the definition of what we mean by conscious experience. When I mask a stimulus and you don't see it, you can't tell me what it was; that's the definition of it. There may be lots of things influencing our behaviour unconsciously, and I can't tell you what they are. If I could, I would be conscious of them. So consciousness is, at the very least, for sharing.
What's interesting is that this kind of idea has floated around the literature, but remained somewhat obscure, for many years. This is a book chapter written by the famous neuroscientist Horace Barlow, and it's not as widely known as Chris's work because it appeared in a book chapter in 1997. But he writes things here that are very similar: what makes the pursuit of communal goals possible for humans is our ability to communicate with each other, which is surely the direct and obvious result of being conscious, because if we weren't conscious of what we're thinking and feeling, we couldn't share it with others. On his hypothesis, conscious experience gives one the means of communicating one's own experience to others; that is its purpose and survival value.
So what we've been working on recently is trying to drill down into the algorithms that might support the sharing of conscious information. First of all, it's useful to think about what is being shared. At the least, I think we can think of content: I might share with you the fact that I'm feeling a bit tired or hungry, or that I can see a bird over there, putting that content into words. But I might also share with you the vividness of that experience. I might say to you, I just can't continue this lecture because my headache is so strong; that's a very strong experience dominating my consciousness. And these things can vary, right? I might be vividly aware of having a headache, or only dully, partially aware of it.
In philosophy this is known as the idea of mental strength, and it goes all the way back to David Hume. Recently, in this paper, Jorge Morales has suggested that mental strength is a phenomenal magnitude, the strength or vividness of an experience, that is shared by all conscious experiences and explains their degree of intensity. And indeed, it seems that this captures something deep about what it means to be aware of different types of mental content, because when you put people in an experiment where they actually have to share information to succeed, they naturally fall back on this sharing of mental strength.
This is an experiment from Bahador Bahrami's group, where they asked pairs of people to sit and look at two different computer screens. They each have a visual task to do; the task itself is not so important here. The important thing is that they were then allowed to chat to each other and come up with a joint decision about what they saw on the screen, and we can look at the kinds of words they used. This was done in Denmark, but the translations are things like, 'I see this very well', or 'I didn't see anything, let's go with yours because I saw nothing, I took a wild guess'. So they're communicating degrees of experience, strengths of experience, and by doing so they can achieve better performance together than the best individual could alone.
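Here is a toy illustration of why communicating the strength of experience can let a pair beat the best individual. The noise levels and the simple "go with whoever is more confident" rule are my own simplification, not the exact procedure or model of that study.

```python
# Toy illustration: two observers each see noisy evidence about a stimulus
# and report a signed confidence; the dyad adopts the report of whoever is
# more confident. With these made-up noise levels, the pair outperforms
# the better individual.
import numpy as np

rng = np.random.default_rng(6)
n = 20000
truth = rng.choice([-1, 1], n)                 # which stimulus was shown
obs1 = truth + rng.normal(0, 1.0, n)           # observer 1's internal evidence
obs2 = truth + rng.normal(0, 1.2, n)           # observer 2 (slightly noisier)

acc1 = (np.sign(obs1) == truth).mean()
acc2 = (np.sign(obs2) == truth).mean()
# Joint decision: take the report with the larger absolute "confidence"
joint = np.where(np.abs(obs1) >= np.abs(obs2), np.sign(obs1), np.sign(obs2))
acc_dyad = (joint == truth).mean()

print(f"observer 1: {acc1:.3f}, observer 2: {acc2:.3f}, dyad: {acc_dyad:.3f}")
```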
Okay.
I just want to give you a flavour of this model. This is the part I said was optional; these are sketches, very much ongoing work, and it's not the kind of thing you would necessarily be expected to talk about in an exam, for instance. But the idea we're working with in our lab is that we can start building this notion of awareness of mental strength into a generative model of perceptual content that we can simulate on a computer, and then devise hypotheses that test this awareness-related computation against its implementation in the brain.
The idea behind generative models, and this is very broad, you might have heard this idea in vision science, is that what the brain is doing is essentially building a generative model of the incoming sensory data. It's trying to infer the best guess of what it's seeing based on the incoming prediction errors and the descending predictions. This is known as predictive coding theory, or predictive processing, and more generally as the theory of generative models. But what's interesting about this architecture is that awareness, whether of the degree of phenomenal magnitude or of content, is not in it; a lot of this processing is suggested to proceed completely unconsciously. That's why we're not aware of how a percept is being formed; it's what's called unconscious inference. We just unconsciously infer that an apple is there in front of us.
So we might need to start thinking about how we can extend these models to include additional higher order levels that monitor the extent to which the system thinks there is content in its first order generative model. This is a higher order theory of consciousness, and the idea is that awareness states are abstractions about the presence or absence of perceptual content, and that might support the communication of mental strength.
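To give a flavour of what "an additional higher order level" could mean computationally, here is a deliberately minimal sketch of my own, not the lab's actual model: a first-order layer produces noisy evidence about content, and a higher-order layer reads out a scalar "mental strength", how strongly the system believes there is any content at all, which could then be reported or shared.

```python
# Deliberately minimal two-level sketch (illustrative assumptions only):
# first-order evidence about content, plus a higher-order readout of
# strength/awareness that abstracts over what the content is.
import numpy as np

def first_order(signal, noise_sd=1.0, rng=None):
    """Noisy evidence about content identity (e.g., left vs right tilt)."""
    rng = rng or np.random.default_rng()
    return signal + rng.normal(0, noise_sd)

def higher_order(evidence, criterion=0.8):
    """Abstract over the content: how strongly is *something* present?"""
    strength = abs(evidence)          # a scalar "mental strength"
    aware = strength > criterion      # all-or-none awareness report
    return strength, aware

rng = np.random.default_rng(7)
for true_signal in (0.0, 0.5, 1.5):   # absent, weak, strong stimulus
    ev = first_order(true_signal, rng=rng)
    strength, aware = higher_order(ev)
    print(f"signal {true_signal:.1f}: evidence {ev:+.2f}, "
          f"strength {strength:.2f}, aware={aware}")
```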
I will now skip over the experiment because I want to get to the part on the ethics. But you have the slides, and if anyone has particular questions about this, feel free to come and talk to me afterwards. As I said, this is very much ongoing work that's not published yet. What we're trying to do with this work is reverse the arrow here: we have functional constraints on what awareness is for, and we hope, and we're very much at the start of this project, that this will enable us, rather than just remaining siloed, testing different favourite theories often built on intuition, to actually develop a working model of the minimal types of computation that might allow the communication of mental strength, the communication of degrees of experience, and then test that against behaviour and brain activity.
Okay.
So just to draw some interim conclusions. We've looked at how techniques such as visual masking and binocular rivalry allow the precise manipulation of awareness of simple stimuli. Converging evidence for unconscious processing of stimuli is provided by people performing above chance on indirect measures of information processing, such as priming or forced-choice responding. The neural correlates of consciousness research programme identifies awareness with frontoparietal activity and recurrent processing, but note that whenever you are reading this literature it's important to assess whether the research has matched for the potential confounds of performance and confidence when assessing the basis of awareness. And finally, I'm excited by the idea of adopting a more functional perspective, just like we do in other fields of psychology: asking what consciousness is for might allow us to build computational models of consciousness-related processing.
Okay.
So mainly we've talked about differences in conscious content; we talked briefly about differences in conscious level at the start. Now I just want to turn to the problems that arise when we're able to start detecting conscious level independently of what clinicians call vigilance. People who are in a vegetative state, or who have some non-responsive forms of advanced dementia, might actually be vigilant in the sense that they're awake, their eyes are open, but they may have very little inner consciousness. At least, that's what a lot of ICU doctors think about people who are in a vegetative state.
What's fascinating, and somewhat disturbing, is that while the vegetative state is clinically described as wakefulness without awareness, a subset of patients may show no reliable behavioural signs and yet be able to communicate; this is related to what's known as the minimally conscious state. This is famously described by Jean-Dominique Bauby, who was a former editor of Elle and then became locked in, and went on to write this beautiful book, The Diving Bell and the Butterfly, just by fluttering his eyelid to indicate which words he wanted on the page, producing a whole book that way.
The problem is that recovery from the vegetative state after around one year is very rare and often involves severe disability. So it raises ethical issues: you might sometimes read in the media about controversial decisions made by the legal and medical professions about whether it's right to remove life support from someone who's in a vegetative state, on the grounds that this is not really living anymore. Now, obviously, this changes a lot if science can come along and say, actually, maybe there is some inner conscious experience there, and maybe it's a meaningful one.
A real advance in this area came from work done by Adrian Owen's lab; he's now in Canada. He showed that in some vegetative state patients, when you put them into a functional MRI scanner and ask them to imagine either walking around their house or playing tennis, the activations you get in, say, the motor cortex and in networks such as the parietal cortex involved in spatial navigation are very similar to those you get in controls asked to imagine walking around the house or playing tennis. That was taken as some evidence that these patients, even though they're behaviourally unresponsive, may well be conscious to a similar degree as the controls.
Similarly, it's been possible to use non-invasive transcranial magnetic stimulation to ping the brain and then look, using EEG, at how the distributed responses reverberate around the brain. I won't go into all the details, but essentially you can take these recordings, compress them down, and ask how complex the elicited activation is. This is known as the perturbational complexity index, or PCI.
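Here is a rough sketch of the "compress and measure complexity" idea, using off-the-shelf zlib compression of a binarised response as a stand-in; the published PCI uses a Lempel-Ziv-based measure with additional normalisation, so treat this purely as an illustration of the principle.

```python
# Rough sketch of the complexity idea behind PCI: compress a binarised
# channels x time response matrix and compare compressed sizes. This is
# an illustration of the principle, not the published PCI algorithm.
import zlib
import numpy as np

def compressibility(binary_matrix):
    """Compressed size (bytes) of a binarised channels x time matrix."""
    return len(zlib.compress(np.packbits(binary_matrix).tobytes()))

rng = np.random.default_rng(8)
n_channels, n_samples = 60, 500
# Stereotyped response: every channel does the same thing (low complexity)
simple = np.tile(rng.integers(0, 2, n_samples), (n_channels, 1)).astype(np.uint8)
# Differentiated response: channels respond in diverse ways (high complexity)
rich = rng.integers(0, 2, (n_channels, n_samples)).astype(np.uint8)

print("stereotyped response:", compressibility(simple))
print("differentiated response:", compressibility(rich))
```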
When you plot this perturbational complexity index on the y axis here, sorted according to the different patient groups, healthy subjects are up here, so you see complex brain responses, and patients in a confirmed vegetative state are down here. But what's interesting is that these behaviourally unresponsive, minimally conscious patients sometimes drift over into the healthy range, and even some vegetative state patients might start to provide some evidence of consciousness.
So I just want to end with a single case study, reported in a paper reviewing a lot of this literature in 2013. This was a 26-year-old male who'd had a motor vehicle accident and was admitted to hospital in a coma, and who over the next 12 years remained in a consistent, behaviourally defined vegetative state. In February 2012, he used this tennis-versus-house imagery method to answer multiple externally verifiable questions: you ask the subject to imagine playing tennis for 'yes', or imagine walking around your house for 'no'. He was then able to answer, using that non-invasive brain imaging method, what his name was, the name of his support worker, and so on. These are things that only he could have known, and they were able to be verified by the medics.
What becomes very difficult is that the same technique could then be used to ask non-verifiable questions that might be important for quality of care, such as what he wants to watch on TV and whether he is in pain. So there are clear ethical implications here. I won't read all of these out, but just to highlight a couple of things.
First of all, we might have an intuition that it must be a terrible quality of life being locked in: all you can do is imagine and create brain activity patterns, you can't do anything, you're sitting there completely stationary. But our intuitions about this might actually be wrong. When you measure quality of life in locked-in syndrome patients, the majority say they're happy with their quality of life even though they're locked in, and this comes across in Bauby's book. The advantage of single-patient communication is that a subset of those patients might not be happy, and there might be very simple things we could change, so being able to communicate with them noninvasively is potentially very important.
Okay.
So just to conclude: I've said a few of these already, and the last point here is that neuroimaging may facilitate communication with behaviourally non-responsive patients and allow classification of residual levels of awareness, but this raises deep ethical issues.
Thanks very much.