Reflection: What can models of visual perception say about phenomenology
09 Jan 2020
Reading time ~11 minutes
<p align="center"> Last Update: 1/28/2020 </p>
This chapter focuses on what models of visual perception can offer to our understanding of visual phenomenology. It introduces four computational models of perceptual decision making, each of which can be categorized as either static or dynamic, and as using either point representations or probabilistic representations of a stimulus. All of the models specify a sensory stage and a decision stage, and they can account for a variety of perceptual behaviors and neural activities. For each model, there are several possible mappings between objective visual perception and subjective phenomenology. These options show that such mappings are not definite, but they are constrained.

Second, from a methodological perspective, I found the chapter very helpful for better understanding the PNAS paper (Denison et al., 2018) and the BBS paper, both of which rely to some extent on SDT-based models. According to SDT, a person’s perceptual decision can be modeled as an internal response drawn from one of two Gaussian distributions, corresponding to the stimulus-present and stimulus-absent conditions, and this internal response is compared with a criterion to produce a decision. In the PNAS paper, a very interesting point the authors make is that inferring decision boundaries from behavior in SDT tasks, as some previous studies did, may be flawed, because an apparent criterion shift can arise from changes in the internal response distributions as well as from an actual update of the decision boundary. By comparing the Bayesian and Fixed models, they showed that observers tended to shift their decision boundaries outward in conditions where they paid less attention to the stimuli.
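To make sure I follow the SDT setup, here is a minimal toy simulation of it (my own sketch, not code from the chapter or the papers; the d′, criterion, and trial-count values are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Arbitrary toy parameters (assumptions, not values from the chapter or papers)
d_prime = 1.5      # separation between stimulus-absent and stimulus-present distributions
criterion = 0.75   # internal responses above this value are reported as "present"
n_trials = 10000

# Internal responses: unit-variance Gaussians centered at 0 (absent) or d' (present)
absent = rng.normal(loc=0.0, scale=1.0, size=n_trials)
present = rng.normal(loc=d_prime, scale=1.0, size=n_trials)

# Compare each internal response with the criterion to get a yes/no report
false_alarm_rate = np.mean(absent > criterion)
hit_rate = np.mean(present > criterion)

print(f"hit rate: {hit_rate:.3f}, false alarm rate: {false_alarm_rate:.3f}")
```

Seen this way, the PNAS paper’s worry is easy to state: a change in the observed hit and false-alarm rates is ambiguous, because it could reflect a move of `criterion` or a change in the underlying response distributions themselves.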
Third, from a theoretical viewpoint, I’m optimistic that studies of perceptual decision making can shed light on our understanding of phenomenal consciousness. Although vision science was originally inspired by curiosity about how we see the world, its development has largely been driven by behavioral and neuroimaging studies that assess people’s objective reactions, decisions, and neural responses, rather than our subjective experiences. Digging into the subjective side of visual perception, however, is intrinsically difficult. The explanatory gap between physically observable things and the seemingly non-physical, inner subjective world has afflicted philosophers for thousands of years. Yet the scientific literature seems to be moving more and more toward demystifying the “hard problem” of consciousness. Theories have been proposed on which the real question is not why consciousness is intractably mysterious, but why it appears to be. To answer that question, research will ultimately come back to the cognitive machinery in our brains, and the “hard” problem of consciousness should end up more tractable.

The relationship between visual perception and phenomenology somehow reminds me of the relationship between attention and awareness. Attention is a more mechanistic, harder-to-articulate process of signal enhancement in the brain. Awareness, on the other hand, is deeply rooted in people’s minds, and we often assume ourselves to be experts on it. Visual perception is no less complicated a process in the brain, and many mysteries remain to be solved, but everyone invited into our labs knows what a visual stimulus looks like to them.

This fundamentally hard jump, the jump from the objective to the subjective, from information we cannot access directly (even within our own brains, I assume we cannot directly feel how populations of neurons fire and encode information) to information that rises into our consciousness, seems to be common to both cases. It might be interesting to ask what information-processing procedures make consciousness, or any specific subjective experience, emerge. For example, a theory I like a lot is Michael Graziano’s Attention Schema Theory. In a nutshell, it proposes that awareness is a schematic model of attention, a description of what the brain is focusing its attention on; what we feel as “subjective” derives from information computed by this model of attention. There are several ways to study this hypothesis. From one direction, we can think about what it predicts: if a model of attention gives us awareness, then people should have weaker control over attention when they are not aware, and conscious experience should be degraded when they are not paying attention. From another angle, what exactly is this model of attention, and is it biologically plausible for the brain to compute it?

For these reasons, studies of how visual perceptual decision making relates to visual phenomenology could be very fruitful for understanding consciousness. Every day we take in far more information through vision than through any other modality, and perhaps we have more conscious experience in vision as well. For any plausible hypothesis that maps the brain’s underlying objective information processing onto subjective experience, vision seems to be the best testbed.

Finally, I want to mention some general points I’ve learned from the chapter and how they could guide my future exploration.

Implications for my thesis project. In our lab, we study human attentional control behavior. More specifically, we are curious about how people might strategically deploy their attentional control settings to help them find a visual target in a dynamically changing environment. From a modeling perspective, there are quite a few interesting things we could attempt. For example, in Irons & Leber (2016), the original ACVS paradigm, participants searched for a target square among variable distractors. Two targets were always available, but one of them became less and less “optimal” over trials because the variable distractors increasingly resembled its color. The crucial measurement is the trial on which a participant switches from one target type to the other, thereby maintaining optimality. In the original paper, participants were found to switch significantly later than would be optimal, showing a suboptimal attentional control strategy. It would be interesting to model people’s tendency to make adaptive choices when they use attentional mechanisms to guide their search goals in the visual world. For example, for most suboptimal observers, the probability of selecting one type of target could be modeled with a sigmoid running from a plateau over the trials where that target type is optimal down to another plateau over the trials where the other target type becomes optimal.
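As a rough illustration of that last idea, here is the kind of switch curve I have in mind (my own sketch; the function form, parameter names, and values are illustrative assumptions, not fits from Irons & Leber):

```python
import numpy as np

def p_select_initial_target(trial, switch_point, slope=0.5, upper=0.95, lower=0.05):
    """Probability of choosing the initially optimal target type on a given trial.

    The curve stays near `upper` while that target is still the optimal choice,
    then falls toward `lower` around the observer's switch point, once the other
    target type has become optimal. Parameter names and default values here are
    illustrative assumptions, not estimates from any data.
    """
    return lower + (upper - lower) / (1.0 + np.exp(slope * (trial - switch_point)))

# Example: an observer whose switch point sits around trial 30 of a 60-trial run
trials = np.arange(1, 61)
probs = p_select_initial_target(trials, switch_point=30.0)
print(np.round(probs[:5], 2), "...", np.round(probs[-5:], 2))
```

Fitting the switch point (and perhaps the slope) for each observer would then give a compact description of how late, and how abruptly, each person abandons the increasingly suboptimal target.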
A final, and admittedly less related, point I take away from reading this book chapter applies to reading scientific literature in general. Often, especially when I had to read a lot of papers for the first time, I would get stuck on some of them for a long time. Maybe there was a concept that I really couldn’t understand, or a misunderstanding had crept in somewhere along the way. But it was often when I read other papers, some closely related and some less so, that I suddenly came to understand it. I had often assumed that when I didn’t comprehend something, there must be something I was missing in the material itself. While this might sometimes be true, it is more likely that my knowledge structure didn’t fit neatly into the author’s narrative. So this encourages me to always look up the citing or cited literature when I can’t understand some part of a paper. It proved useful again here: the book chapter you sent me showed me how your two papers fit into the theoretical framework, and the two papers in turn helped me better understand the technical details covered in the book chapter.

Implications for my future studies. I’ve found that a solid mathematical background would help me in many important ways in graduate school. So, no matter what I end up focusing on in graduate school, I plan to systematically teach myself more advanced math and computer science this spring and summer.

To summarize, I think this book chapter is an excellent introduction to model-based studies of perceptual decision making, which can help us not only understand how the human visual system works but also infer what we might subjectively feel when seeing a visual stimulus. It has also given me a lot to think about, so I consider it a very valuable read.