Session Detail


Poster session 3, Multifunction Room

Sunday, July 16, 11:15-12:00, 12:30-13:15

Attention

Visual Attention Differences in the Broader Autism Phenotype

Presentation Number: P3.01 Abstract Number: 0128
Alana Cross 1, *, Robin Laycock 2, Sheila Crewther 1
1La Trobe University
2La Trobe University, RMIT University

Visual attention is known to vary across the Broader Autism Phenotype (BAP), although this relationship has not yet been adequately explored. Attentional deficits are associated with the emergence of autism characteristics, including communication and social problems. Fifty children aged 5-12 years completed the Attention Network Task (ANT) to examine Posner's attentional networks: alerting, orienting, and executive control. Their performance was compared with performance on an Inspection Time (IT) task and with scores on a parent-rated Autism Spectrum Quotient (AQ-Child) questionnaire. An overall correlational analysis showed that IT performance was associated with reaction time (RT) for all congruent cued conditions, but not incongruent conditions. Attention to Detail, measured on the AQ-Child, was associated with RT on ANT incongruent cued conditions only. The Orienting and Executive Control networks were correlated on the ANT. When participants were split into High and Low AQ groups, there was a group difference in mean accuracy on incongruent cued conditions of the ANT; no group differences were found in RT or in congruent conditions. High AQ traits appear to be associated with less flexibility in shifting attention. In line with autism research, Posner's attentional networks may not be independent of each other in the BAP. Further, parent-reported behavioural traits can relate to phenotypic visual processing characteristics.


 

The influence of invisible local information on the integration of global form and motion coherence.

Presentation Number: P3.02 Abstract Number: 0083
Charles Chung 1, *, Sieu Khuu 1
1The University of New South Wales

In this study we examined whether the integration and detection of global form and motion rely on local information that is progressively made invisible using continuous flash suppression (CFS). Global motion and form coherence thresholds were measured using variants of the Global Dot Motion (GDM) and Glass pattern stimuli, in which the signal-to-noise ratio was varied (using a staircase procedure targeting the 79% correct performance level) until the pattern could just be detected. The stimulus was presented in a two-interval forced-choice design in which the task was to indicate the interval containing the global pattern (defined by motion or form). Across conditions, spatial sectors of the stimulus (0, 25, 50, or 75%) were suppressed using CFS, and we determined whether this affected form and motion coherence thresholds. We found that increasing the area of suppression did not affect coherence thresholds; performance was similar to when the entire stimulus was visible to the observer. These findings suggest that visual awareness is not a requirement for form and motion integration and, importantly, that unconscious information continues to influence conscious perception.
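For readers unfamiliar with the thresholding procedure mentioned above, a transformed up-down staircase is one common way to target roughly 79% correct in a two-interval forced-choice task (a 3-down/1-up rule converges on about 79.4% correct). The sketch below is illustrative only, with an invented simulated observer, arbitrary starting level and step size; it is not the authors' code.

```python
import random

def three_down_one_up(n_trials, respond, start=0.5, step=0.05):
    """Illustrative 3-down/1-up staircase; converges near 79.4% correct.
    `respond(level)` should return True/False for one 2IFC trial."""
    level, streak, history = start, 0, []
    for _ in range(n_trials):
        correct = respond(level)
        history.append((level, correct))
        if correct:
            streak += 1
            if streak == 3:                      # three in a row -> make task harder
                level, streak = max(0.0, level - step), 0
        else:                                    # any error -> make task easier
            level, streak = min(1.0, level + step), 0
    return history

# Hypothetical observer whose accuracy rises with signal coherence.
observer = lambda coherence: random.random() < 0.5 + 0.45 * coherence
track = three_down_one_up(80, observer)
```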


 

The aging effect on time perception: An ERP study

Presentation Number: P3.03 Abstract Number: 0121
Hsing-Hao Lee 1, *, Shulan Hsieh 1
1Department of Psychology, National Cheng Kung University

Time perception is indispensable in our daily lives and is influenced by many cognitive functions that are themselves affected by aging. The current study collected event-related potential (ERP) data while participants performed a temporal bisection task, to investigate the effect of aging on time perception. We used the contingent negative variation (CNV), together with behavioral responses, as indices of time perception ability. We found that older adults' CNV peak latency was significantly shorter than that of younger adults. In addition, CNV peak amplitude was smaller for older adults than for younger adults. Both results indicate an aging effect on time perception: older adults' pacemaker operates at a different rate from that of younger adults. This is the first study to use an ERP approach to examine time perception and aging, and it verifies the differences in timing function between younger and older adults through both behavioral and electrophysiological measurements.


 

Visual perception in the peripheral visual field is modulated by eccentric gaze

Presentation Number: P3.04 Abstract Number: 0081
Ryoichi Nakashima 1, *
1The University of Tokyo

Visual perception can be altered by head direction even when gaze is directed at the same location (Nakashima & Shioiri, 2015), such that perception is facilitated during eccentric gaze when a visual stimulus appears anterior to the head. Here, the effect of eccentric gaze, in which gaze direction is not aligned with head direction, on visual perception in central and peripheral vision was investigated. Participants judged the orientation of a "T" presented in central vision (central task) and simultaneously detected a dot appearing in peripheral vision (peripheral task). The main manipulation was gaze direction: frontal vs. eccentric. Results indicated that performance on the central task did not differ between gaze conditions, and peripheral task performance did not differ when the dot appeared near fixation. In contrast, an effect of gaze emerged when the dot appeared far from fixation: dot detection was superior when the dot appeared to the left (right) of fixation during a right (left) eccentric gaze. This finding suggests that visual perception is facilitated approximately in the direction anterior to the head. We conclude that eccentric gaze influences visual perception, particularly in peripheral rather than central vision.


 

Shared and distinct information processing limitations across attentional forms and modalities

Presentation Number: P3.05 Abstract Number: 0109
Gwenisha J. Liaw 1, Takashi Obana 1, Tiffany T.Y. Chia 1, Christopher L. Asplund 1, *
1Singapore Institute for Neurotechnology

Selective attention allows us to prioritize which sensory information reaches awareness. In both the visual and auditory domains, attention controlled either voluntarily (goal-directed) or by external events (stimulus-driven) has a dark side: Unattended items are frequently missed. The extent to which these attentional limitations are due to common cognitive mechanisms, however, is not fully understood. In this study, we adopted an individual differences approach to investigate the relationships amongst temporal attentional capacity limitations. The Attentional Blink (AB) indexed goal-directed attentional limitations, whereas Surprise-induced Blindness (SiB) and its auditory analogue Surprise-induced Deafness (SiD; Obana & Asplund, in prep) indexed stimulus-driven ones. Each participant (n=75) was tested twice on each paradigm in each sensory modality, thereby allowing us to calculate cross-task correlations and test-retest reliability. Despite finding strong test-retest reliability and weaker, yet significant, correlations between blink and surprise deficits within modalities, only SiB and SiD were related across modalities. In contrast, visual and auditory blink magnitudes were uncorrelated. We conclude that goal-directed and stimulus-driven attention may be contingent on partially shared capacity limits within modalities. In addition, shared stimulus-driven deficits across modalities may be due to a central cross-modal alerting mechanism.


 

Measuring attentional facilitation related to preparation of hand movements

Presentation Number: P3.06 Abstract Number: 0064
Takumi Miura 1, Kazumichi Matsumiya 1, Ichiro Kuriki 1, Satoshi Shioiri 1, *
1Tohoku University

Visual processing is enhanced at the goal of a hand movement, suggesting the existence of visual attention related to action. We used the steady-state visual evoked potential (SSVEP) to measure the temporal profile of attentional modulation related to hand movements. Two independently flickering disks were presented to the right and left of fixation, and participants attended to one of the two disks to perform an RSVP task. While attending to one disk, they were asked to move their hand to the disk indicated by a cue (either the attended or the unattended disk) from an initial location just below fixation. The time course of the SSVEP revealed that, at the attended disk, amplitude increased about 500 ms before the hand movement and decreased after movement onset. No such effect was found at the unattended disk. These results can be interpreted as follows: there is additional attentional modulation at a position to which attention has already been allocated when that position is defined as the goal of a hand movement. The attentional effect appears before the hand movement and disappears during the movement, perhaps because attention shifts to the moving hand rather than the goal of the movement.


 

Spatial compression at peripheral vision without saccades and visual masks

Presentation Number: P3.07 Abstract Number: 0105
Masahiko Terao 1, *, Fuminori Ono 1
1Yamaguchi University

It is known that when a stimulus is flashed just before a saccade, its location appears closer to the saccade target than it actually is. Similarly, it has been shown that a flashed stimulus is shifted towards a reference when followed by a visual mask. Here we report a novel spatial compression phenomenon in the absence of saccades and visual masks. In our experiment, two disks separated horizontally by 12 degrees center-to-center were flashed simultaneously. Each was presented for 50 ms at 5 degrees above the fixation point. Observers estimated the position of each disk relative to a subsequently presented probe stimulus. When the two disks were presented in the central region, i.e., 0 and 12 degrees horizontally from the fixation point, the perceived position of each disk was close to its actual location. On the other hand, when the two disks were presented further from the center, i.e., 12 and 24 degrees horizontally from the fixation point, the perceived positions of the disks were shifted toward each other. This attraction was not observed when the two disks were presented individually. This phenomenon might be explained by positional averaging in peripheral vision.


 

Perceived depth and accommodation

Presentation Number: P3.08 Abstract Number: 0052
Harold Hill 1, *, Trent Koessler 1
1University of Wollongong

Focusing of the eye at close distances, ocular accommodation, potentially provides information about depth. However, there is little evidence that this source of information is used by the human visual system. We report three experiments using an illusion of three-dimensional depth reversal, the hollow-face illusion, to investigate the relationship between perceived depth and accommodation. In the first two experiments we found, using laser speckle optometry, that observers accommodate to perceived rather than actual depth when experiencing the illusion, despite the availability of closed-loop feedback. In the third experiment we found that a sharp pattern of dots, expected to facilitate accommodation, was more effective in resolving depth than the equivalent pattern of blurred dots. This is consistent with accommodation influencing, as well as being influenced by, perceived depth. Both observations held for monocular as well as binocular viewing, ruling out the possibility that they were driven by binocular vergence. We interpret the findings as evidence that accommodation is closely linked to perceived depth in a situation where multiple other sources of depth information are available, even if it does not provide reliable information about depth in isolation.


 

A linear mathematical model of attentional modulation in the visual system

Presentation Number: P3.09 Abstract Number: 0096
Akihiro Masaoka 1, Takeshi Kohama 2, *
1Graduate School of Biology-Oriented Science and Technology, Kindai University, Japan
2Faculty of Biology-Oriented Science and Technology, Kindai University, Japan

The visual system obtains information from the surrounding environment by moving the eyes. Meaningful information is preferentially processed to understand the situation and to decide the next action; this higher-order brain function is called visual attention. Lanyon & Denham (2004) proposed a mathematical model of the functional network comprising visual area V4, the inferior temporal cortex (IT), the lateral intraparietal cortex (LIP), and the prefrontal cortex (PF). This model can replicate attentional modulation of network responses and can perform visual search. However, it is difficult to extend and control because of its strong non-linearity. In this study, we propose a mathematical model that is essentially described by linear connections among visual areas V1, V4, IT, LIP, and PF. The connection strengths among cortical areas are restricted to a certain range to prevent neural activity from increasing without bound. The simulation results show that the proposed model replicates the responses of typical V1 neurons, and the behavior of the model is quite stable owing to the reduced non-linearity. This implies that our model is better suited for elucidating the neural mechanisms of the visual attention system.
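As an illustration of the kind of bounded linear network described above, activity can be kept from growing without bound by constraining the spectral radius of the connection matrix. The area names are taken from the abstract, but the connection matrix, gain parameter, and update rule below are assumptions for illustration, not the authors' model.

```python
import numpy as np

# Five areas named in the abstract; the weights below are illustrative only.
areas = ["V1", "V4", "IT", "LIP", "PF"]
rng = np.random.default_rng(0)
W = rng.uniform(-0.3, 0.3, size=(5, 5))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))   # spectral radius < 1 -> bounded activity

def simulate(steps, drive, attention_gain):
    """Linear update r(t+1) = W r(t) + gain * drive; gain models attention."""
    r = np.zeros(len(areas))
    for _ in range(steps):
        r = W @ r + attention_gain * drive
    return r

drive = np.array([1.0, 0.0, 0.0, 0.0, 0.0])      # external input to V1 only
print(simulate(50, drive, attention_gain=1.0))   # unattended response
print(simulate(50, drive, attention_gain=1.5))   # attended: linearly scaled
```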


 

The Neural Activity for Reloading vs. Uploading Conscious Representations during Motion-induced Blindness

Presentation Number: P3.10 Abstract Number: 0040
Li-Ting Tsai 1, Hsin-Mei Sun 2, Rufin VanRullen 3,4, Chien-Te Wu 5, 6, *
1Taiwan Association for Visual Rehabilitation, Taipei, Taiwan
2Department of Cognitive, Linguistic, and Psychological Sciences, Brown University, USA
3Université de Toulouse, CerCo, Université Paul Sabatier, Toulouse, France
4CNRS, UMR 5549, Faculté de Médecine de Purpan, Toulouse, France
5School of Occupational Therapy, College of Medicine, National Taiwan University, Taiwan
6Department of Psychiatry, National Taiwan University Hospital, College of Medicine, National Taiwan University, Taiwan

We previously reported an intriguing illusory temporal reversal whereby a new stimulus onset (e.g., a dot flash) presented during motion-induced blindness (MIB) triggers an early reappearance of the "perceptually disappeared" target, yet is systematically perceived as occurring after the target's reappearance. This illusion implies that the unconscious target representation can be quickly reactivated, with a temporal advantage for its conscious reloading compared to the conscious uploading of a newly presented visual stimulus. However, the neural correlates of perceptually reloading an unconscious representation during MIB remain unclear. To address this question, we recorded EEG while participants (N = 23) performed a modified MIB task in which a probe was presented during either a typical MIB condition or a no-MIB condition. We compared the event-related potentials (ERPs) and event-related spectral perturbations (ERSPs) time-locked to probe onset in the typical MIB condition (perceptual reloading) and the no-MIB condition (no perceptual uploading). Our results showed that perceptual reloading was accompanied by a significant increase in high-alpha and low-beta power beforehand and a significant increase in ERP amplitudes at 150-250 ms over parieto-occipital sites afterward.


 

Attentional Capture is affected by Upright or Inverted V-shape

Presentation Number: P3.11 Abstract Number: 0086
Po-Pin Lin 1, Yang-Ming Huang 2, *
1NCKU
2FJU

Previous studies have indicated that a V-shape is perceived as threatening and captures attention rapidly. In this study, we examined attentional capture elicited by a V-shape presented against the background of an emotional facial expression. The mouth of the background face was drawn as a V-shape: an upright V-shape mouth forms a happy expression, and an inverted V-shape mouth a sad one. The target (an upright or inverted V-shape) was placed on the mouth. In Experiments 1 to 3, participants determined whether the target was an upright or an inverted V-shape; the background was an upright face in Experiment 1, an inverted face in Experiment 2, and both types of face in Experiment 3. In Experiment 4, participants judged whether the target and the background V-shape had congruent orientations. The results revealed that the background influenced task performance regardless of whether it was task-relevant: in Experiments 1 to 3 the effect was driven by the background V-shape, whereas in Experiment 4 it was driven by the emotional features. In sum, participants were attracted more by the backgrounds than by the targets, and a complex attentional mechanism appears to operate when several V-shapes are presented simultaneously.


 

Attention-modulated Interactions between Statistical Summary Perception & Statistical Learning

Presentation Number: P3.12 Abstract Number: 0034
Wen Tai 1, Tsung-Ren Huang 1, *
1Department of Psychology, National Taiwan University

To efficiently process the overwhelming information available in a visual scene, the human visual system can not only compute summary statistics of the scene (e.g., the mean size of objects) but also learn statistical regularities within it. However, these two automatic statistical processes have been reported to interfere with each other (Zhao, Ngo, McKendrick, & Turk-Browne, 2011, Psychological Science), and the cause of this interference is not yet entirely clear. Here we propose that the observed interference results from a conflict between the relatively distributed spatial attention demanded by statistical summary perception and the relatively localized spatial attention demanded by statistical learning. We implemented a computational model to illustrate that distributed attention for statistical summary perception can impair statistical learning of local regularities, which, once learned, can capture attention and thus bias estimates of global summary statistics toward local statistics. Our computer simulations successfully replicated findings in the statistical learning literature and the various mutual interference phenomena reported by Zhao et al. (2011). The proposed model offers insight into how attention may mediate both statistical processes, and its prediction (no interference between statistical summary perception and statistical learning of global scene regularities) has been confirmed by our experiment.
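To make the proposed biasing mechanism concrete, the toy calculation below treats the mean-size estimate as an attention-weighted average: when attention is captured by items belonging to a learned local regularity, the estimate is pulled toward their sizes. The item sizes and weights are invented for illustration and are not taken from the authors' model.

```python
import numpy as np

sizes = np.array([1.0, 1.2, 0.8, 3.0, 3.2])           # last two items form a learned pair
uniform_w = np.full(sizes.size, 1 / sizes.size)        # distributed attention
captured_w = np.array([0.1, 0.1, 0.1, 0.35, 0.35])     # attention captured by the pair

print(sizes.mean())          # true mean size (1.84)
print(sizes @ uniform_w)     # unbiased estimate under distributed attention
print(sizes @ captured_w)    # estimate biased toward the learned local regularity (2.47)
```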


 

Unconscious perceptual grouping modulated by top-down attention

Presentation Number: P3.13 Abstract Number: 0080
Shih-Yu Lo 1, *
1National Chiao Tung University

Some researchers have suggested that perceptual grouping can be modulated by attention, whereas others have suggested that perceptual grouping is an automatic process that does not require consciousness. In this presentation, I integrate the two lines of research and demonstrate that perceptual grouping can be modulated by attention, but that this modulation takes place unconsciously. Participants were presented with a display containing two central horizontal bars, while a railway-shaped grouping pattern defined by color similarity, of the kind that normally induces a Ponzo illusion, was presented in the background. The task was to judge the relative lengths of the two central bars. Although the participants were unaware of the railway-shaped grouping pattern in the background, their line-length judgments were nonetheless biased by it. More importantly, this unconscious biasing effect was more pronounced when the railway-shaped grouping pattern was formed by the attended color than by an unattended color, indicating an attentional modulation of perceptual grouping without consciousness. The attentional modulation effect was also dynamic, being significant with a short presentation time but not with a longer one. A model that dissociates the effects of attention and consciousness will be proposed to integrate these results.


 

Predicting direction of motion in depth by a model with lateral motion detectors

Presentation Number: P3.14 Abstract Number: 0020
Wei Wu 1, *, Kazumichi Matsumiya 1, Ichiro Kuriki 1, Satoshi Shioiri 1
1Tohoku University

Perception and estimation of motion direction in depth are precise, and humans can easily catch or avoid a ball approaching the head. The inter-ocular velocity difference (IOVD) is a cue to motion in depth (MID), and it has been shown psychophysically that there is a visual mechanism that analyzes IOVD for motion-in-depth perception (e.g., Shioiri et al., 2000). Physiological studies have revealed neurons tuned to IOVD signals in monkey MT (Czuba et al., 2014; Sanada & DeAngelis, 2014). We propose a model of IOVD-based motion in depth that takes psychophysical and physiological findings in the literature into account. To realize direction selectivity for motion in depth, we assume several MID detectors with different tunings to directions in depth. Direction selectivity for motion in depth is required to catch or avoid an approaching ball, and there is psychophysical and physiological support for it (Beverley & Regan, 1975; Czuba et al., 2014). Each MID detector receives inputs from the lateral motion detectors (LMDs) of the two eyes, and its direction tuning is built on the speed selectivity of the LMDs. Simulations showed that the model can predict psychophysical results on motion in depth to some extent.
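The core IOVD computation can be sketched as follows: the direction of motion in depth is recovered from the difference between the two eyes' lateral velocities, while their sum reflects lateral motion. The function below, the Gaussian detector tuning, and all parameter values are illustrative assumptions, not the authors' model.

```python
import numpy as np

def mid_direction(v_left, v_right):
    """Illustrative direction-in-depth angle (deg) from monocular velocities:
    0 deg = purely lateral motion, 90 deg = motion along the line of sight."""
    return np.degrees(np.arctan2(v_left - v_right, v_left + v_right))

def mid_detector_bank(v_left, v_right, preferred_dirs, sigma=30.0):
    """Responses of hypothetical MID detectors with Gaussian tuning (deg)."""
    d = mid_direction(v_left, v_right)
    return np.exp(-0.5 * ((preferred_dirs - d) / sigma) ** 2)

# An object approaching along the midline produces opposite lateral motion in
# the two eyes; the detector preferring 90 deg responds most strongly.
preferred = np.array([0.0, 45.0, 90.0, 135.0, 180.0])
print(mid_detector_bank(v_left=1.0, v_right=-1.0, preferred_dirs=preferred))
```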


 

The effect of attentional focus on motor learning in a mirror drawing task

Presentation Number: P3.15 Abstract Number: 0074
Shi-Sheng Chen 1, Li Jingling 1, *
1China Medical University

Previous studies have shown that an external focus of attention, e.g., focusing on the consequences of one's actions, can enhance motor learning. Nevertheless, whether the same focus of attention benefits novices is still under debate. This study introduces the mirror-drawing task to test the effect of attentional focus. The mirror-drawing task is unfamiliar to most people in daily life and relies heavily on eye-hand coordination, making it well suited to ensuring novice performance. Two groups of participants (n = 20), one receiving external-focus and the other internal-focus instructions, were recruited. All were right-handed (according to the Edinburgh Handedness Inventory) males aged 20 to 25 years. Instructions were delivered to ensure that participants adopted an external or internal focus of attention during mirror drawing. Results showed that the internal-focus group took longer on the first trial (140.06 s) than the external-focus group (100.74 s), while both groups improved significantly on the second trial (120.40 vs. 71.26 s). Our data suggest that an external focus improves motor learning more than an internal focus, even when participants are novices at the task.


 

Multisensory Perception

Shift of visual attention to the illusory hand location

Presentation Number: P3.16 Abstract Number: 0050
Moe Nonomura 1, Chia-huei Tseng 1, Kazumichi Matsumiya 1, Ichiro Kuriki 1, Satoshi Shioiri 1, *
1Tohoku University

Recent studies have suggested an enhancement of visual processing in peri-personal space. We examined whether attention is directed to the illusory hand location when subjects perceive their hand at a location different from its real location, using the disappearing hand trick (DHT; Newport et al., 2011). We used the flash-lag effect (FLE) to measure attention: in the FLE, a flash aligned with a moving stimulus is perceived as lagging behind it, and the FLE is modulated by attention (Shioiri et al., 2010). The disappearing hand trick is a virtual-reality technique that produces perceptual misregistration between visual and proprioceptive information about one's hand. If this misregistration influences visual attention, attentional modulation of the FLE should be found around the perceived hand rather than the real hand. In addition to the illusory condition, there was a control condition in which no illusion was induced and subjects perceived their hand at its actual location. We found that the difference in the attentional peak between the two conditions correlated with the shift of perceived hand location produced by the DHT. That is, visual attention shifted to the illusory hand location, suggesting that visual attention in peri-personal space is a multimodal phenomenon.


 

Neural correlates of sound-induced visual experience in acquired auditory-visual synesthesia

Presentation Number: P3.17 Abstract Number: 0131
Zixin Yong 1, *, Po-Jang Hsieh 1, Dan Milea 2
1Duke-NUS Medical School
2Singapore National Eye Center

Auditory-visual synesthesia (AVS) can be acquired by a small number of patients suffering from visual impairment. The visual experiences evoked by auditory stimulation are often simple but pronounced (e.g., phosphenes). Despite some attempts to discern the neural correlates of sound-induced visual experiences in acquired AVS, the exact brain regions involved remain elusive. In this study, we used fMRI to investigate this question in a patient with acquired AVS. During the fMRI scan, pure tones of various pitches were presented to the patient, who reported the appearance of phosphenes by pressing one of two buttons for a yes/no response. Besides response-related motor-area activations, bilateral primary and secondary visual cortex activations were observed when contrasting phosphene trials with non-phosphene trials. In a control fMRI experiment, a blindfolded healthy participant was asked to signal the presence of a light flashed toward his face together with tones. He was told that the light would be flashed on some trials, but in reality no light was flashed. In this case, only motor-area activation was observed when contrasting yes with no trials. Our results demonstrate that sound-induced visual experience in acquired AVS is correlated with bilateral primary and secondary visual cortex activation.


 

The different effects of visual perceptual grouping on the fission and fusion illusions

Presentation Number: P3.18 Abstract Number: 0035
Riku Asaoka 1, *, Yasuhiro Takeshima 2
1Tohoku University
2Doshisha University

A single flash paired with two auditory beeps is often perceived as a double flash, while a double flash paired with one auditory beep is often perceived as a single flash. These phenomena are called the fission and fusion illusions, respectively. Previous studies have shown that intramodal perceptual grouping interferes with intermodal perceptual grouping, resulting in reduced crossmodal effects. It remains unclear, however, how visual perceptual grouping affects the fission and fusion illusions. The present study examined this issue using audiovisual inducer stimuli as incongruent cues. Participants were presented with target flashes, visual inducers, and auditory inducers, with the number of each stimulus varying across trials. The task was to report the perceived number of target flashes while ignoring the visual and auditory inducers. The results indicated that the fission illusion did not occur when a single target flash was accompanied by one visual and two auditory inducers. However, the fusion illusion occurred even when double target flashes were presented along with one auditory and two visual inducers. These results indicate that visual perceptual grouping reduces the occurrence rate of the fission illusion, but not of the fusion illusion.


 

Sensation transference from plateware to food: The sounds and tastes of plates

Presentation Number: P3.19 Abstract Number: 0127
Yi-Chuan Chen 1, *, Andy Woods 1, Charles Spence 1
1University of Oxford

Two experiments were designed to extend the well-known Bouba/Kiki effect to an unusual set of commercially produced plateware, and further to assess the influence of these plates on the expected taste of a dessert, based on the theory of crossmodal correspondences. The results show that plates with a smoother circumference are more likely to be matched to "Bouba", while those with a pointier circumference are more likely to be matched to "Kiki", demonstrating the typical Bouba/Kiki effect. Both the shape and colour of the plates modulated people's ratings of the expected taste and liking of the dessert displayed on them. Specifically, the colour of the plate induced a general effect on taste expectations that was consistent with the white-sweet and black-bitter associations. The shape of the plate modulated the expected liking of the chocolate ice-cream and the expected sweetness of the lemon sorbet. Finally, colour and shape conjointly modulated the expected sourness of the lemon sorbet. These results are relevant to optimizing the visual appearance of specific dishes in restaurants and on product packaging.


 

Self-Motion Perception Induced by Visual Motion without Luminance Modulation

Presentation Number: P3.20 Abstract Number: 0016
Shinji Nakamura 1, *
1Nihon Fukushi University

Uniform motion of a large visual display that occupies most of the observer's field of view can induce illusory self-motion in the opposite direction (visually induced self-motion perception, also known as vection). Vection research has indicated that effective vection induction may require luminance modulation in the motion display, and that visual motion without luminance modulation (so-called second-order motion, such as contrast-defined or motion-defined motion) induces either no vection or only weak vection. The present study investigated the possibility of inducing vection with non-luminance-based motion, using another type of motion stimulus: orientation-defined motion (fractal rotation; Benton et al., 2007). A psychophysical experiment in which 13 undergraduate observers participated indicated that fractal rotation can induce illusory self-rotation (roll vection) of considerable strength, although it was somewhat weaker than that induced by luminance-defined visual rotation. The results suggest that luminance modulation in visual motion is not essential for effective induction of self-motion perception.


 

Dissociating the roles of background color and ipRGCs in audiovisual integration

Presentation Number: P3.21 Abstract Number: 0116
I-tan Weng 1, Yi-Chuan Chen 2, Li Chu 3, Akiko Matsumoto 4, Wakayo Yamashita 4, Sei-ichi Tsujimura 4, Su-Ling Yeh 1, 5, *
1Department of Psychology, National Taiwan University, Taipei, Taiwan
2Department of Experimental Psychology, University of Oxford, Oxford, United Kingdom
3Department of Psychology, The Chinese University of Hong Kong, Hong Kong, China
4Department of Information Science and Biomedical Engineering, Kagoshima University, Kagoshima, Japan
5Neurobiology and Cognitive Neuroscience Center, National Taiwan University, Taipei, Taiwan


The superior colliculus has been demonstrated to be a site of early multisensory integration in animal studies; it also receives input from intrinsically photosensitive retinal ganglion cells (ipRGCs), which are most sensitive to blue light, with a peak at 480 nm. We examined whether signals from ipRGCs modulate human multisensory integration by comparing behavioral performance under different visual backgrounds while controlling luminance and color. In a simultaneity judgement task, a flash and a beep were presented at various SOAs, and participants reported whether the two stimuli were presented simultaneously. Participants were better at discriminating audiovisual simultaneity at the 100-ms SOA in the visual-leading condition when the background was blue (higher ipRGC stimulation) than when it was orange (lower ipRGC stimulation) (Experiment 1). Nevertheless, when the level of ipRGC stimulation was manipulated by presenting the backgrounds through filter lenses to reduce ipRGC stimulation (Experiment 2) or through a multi-primary projector system to increase ipRGC stimulation (Experiment 3), with the background colors held constant using metamers, there was no modulatory effect on audiovisual simultaneity judgments. The modulation of blue light on the precision of human audiovisual simultaneity perception is therefore likely associated with higher levels of visual processing rather than with direct inputs from ipRGCs.


 

The influence of sound on visual global motion directional discrimination: An equivalent noise approach

Presentation Number: P3.22 Abstract Number: 0125
Ang-Ke Ku 1, Pi-Chun Huang 1, *
1National Cheng Kung University, Tainan, Taiwan

Information from different sensory modalities is processed simultaneously, and the modalities influence one another to help people interpret the environment. In this study, we focused on audiovisual interactions in motion-integration processing. We used the equivalent noise paradigm to investigate how sound influences global motion discrimination thresholds and to disentangle whether sound influences the precision of detecting local motion directions (internal noise), the ability to pool these local motion signals across space (sampling efficiency), or both. The visual stimuli consisted of 100 dots, and the moving directions were sampled from a normal distribution with five levels of standard deviation (external noise). Observers discriminated the direction of the global motion under four sound conditions (absent, stationary, congruent, and incongruent). The psychometric functions showed a directional sound bias in the observers' responses, reflected in a change in guess rate. Thresholds increased with the level of external noise but did not differ across the four sound conditions, indicating that neither uninformative nor informative sound influenced the observers' motion discrimination ability. In conclusion, sound influenced neither the internal noise nor the sampling efficiency, but it did induce a directional response bias at the decision level.
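For context, the equivalent noise (linear amplifier) approach mentioned above models the discrimination threshold as a function of external noise, with internal noise and sampling efficiency as the two free parameters: thresholds are flat while internal noise dominates and rise once external noise exceeds it. The sketch below uses the standard formulation with invented parameter values; it is not the authors' analysis code.

```python
import numpy as np

def predicted_threshold(sigma_ext, sigma_int, n_samples):
    """Standard equivalent-noise prediction:
    threshold^2 = (sigma_int^2 + sigma_ext^2) / n_samples."""
    return np.sqrt((sigma_int ** 2 + sigma_ext ** 2) / n_samples)

external_noise = np.array([0.0, 2.0, 4.0, 8.0, 16.0])   # SD of dot directions (deg)
print(predicted_threshold(external_noise, sigma_int=3.0, n_samples=20))
# Fitting this function separately to each sound condition would reveal whether
# sound changes sigma_int (internal noise), n_samples (sampling efficiency), or neither.
```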


 

Approaching auditory trees make wooden sticks feel shorter

Presentation Number: P3.23 Abstract Number: 0112
Maiko Uesaki 1, *, Hiroshi Ashida 2, Akiyoshi Kitaoka 1, Achille Pasqualotto 3
1Ritsumeikan University
2Kyoto University
3Sabanci University

An increase in the retinal size of stationary objects in the environment is one of the cues to the observer's forward motion. Here, a series of six images, each comprising a pair of dark pine-tree figures against a light background, was translated into the auditory modality using the vOICe, software developed to assist the blind by converting visual scenes into sounds. The resulting auditory stimuli were presented either in sequence (i.e., increasing in intensity and bandwidth, conveying a pair of pine trees becoming larger in the visual field) or in a scrambled order. During presentation of the auditory stimuli, blindfolded participants held one of three wooden sticks of varying lengths in their hands and estimated its length by free haptic exploration. Results showed that participants who listened to the auditory stimuli in sequence, indicative of the listener's motion towards the objects, underestimated the lengths of the sticks. The consistent underestimation observed in this study may be due to a mechanism similar to that underlying moving size-contrast illusions, in which an object surrounded by others increasing in size is perceived to be smaller than one surrounded by others of constant size.


 

The validity of facial and vocal cues: Testing the backup signal hypothesis.

Presentation Number: P3.24 Abstract Number: 0104
Zhi-Yun Liu 1, Wei-Lun Chou 1, *
1Department of Psychology, Fo Guang University

Faces and voices may offer backup signals or multiple messages. We examined this debate by correlating perceived facial and vocal attractiveness in men and women. We also investigated whether facial and vocal cues are valid for judging physical characteristics of the models who provided the photos and voices. We photographed and recorded 25 women and 25 men speaking five vowels. Standardized facial pictures and vocal samples were rated for attractiveness, height, body size, masculinity/femininity, and health by 64 participants. We found that participants could accurately determine the height of the owner of a face or a voice. However, only facial information, not vocal information, could be used to judge body size accurately. More importantly, the results showed that participants made similar judgments from photos and voices, with particularly strong correlations for height, body size, and masculinity/femininity. Moreover, visual and vocal attractiveness were found to correlate positively when men rated women. These results are interpreted as consistent with the backup signal hypothesis.


 

Approaching sounds dilate perceived time

Presentation Number: P3.25 Abstract Number: 0102
Achille Pasqualotto 1, *
1Faculty of Arts and Social Sciences, Sabanci University

The literature reports numerous examples of the effect of moving visual stimuli on time estimation; here we investigated the effect of auditory stimuli. Auditory stimuli were rendered using the vOICe, sensory substitution software that converts visual images into equivalent auditory 'images'. We rendered the sound of an approaching object, the sound of an object moving away from the listener, and a 'scrambled' version of the previous two stimuli (baseline condition). These auditory stimuli were played repeatedly to blindfolded participants and constituted the 'background' of the task; the main task consisted of estimating the duration of target sounds. Target sounds were 300 Hz pure tones, and thus clearly distinguishable from the background. We found that when participants were listening to the sound of approaching objects, they overestimated the duration of the target sounds; in other words, the sound of approaching objects dilated perceived time. This bias can be interpreted as an evolutionary advantage, because overestimating time reduces the perceived distance between the listener and an approaching object, thus prompting faster behavioural responses.


 

Social Interaction and Preference

Transcranial direct current stimulation over the medial prefrontal cortex affects the subjective experience of beauty

Presentation Number: P3.26 Abstract Number: 0085
Koyo Nakamura 1, Hideaki Kawabata 2, *
1Waseda University
2Keio University

Neuroaesthetics is concerned with the biological bases of the subjective experience of beauty. Neuroimaging studies have revealed that neural activity in the medial prefrontal cortex (mPFC) correlates with the subjective experience of visual beauty. However, correlational studies are poorly suited for demonstrating a causal relationship between subjective beauty and its neural underpinnings. To investigate the causal role of the mPFC in aesthetic appreciation, we applied transcranial direct current stimulation (tDCS) and examined whether non-invasive brain stimulation modulates the aesthetic appreciation of abstract artworks. In the experiment, participants rated the subjective beauty and ugliness of abstract paintings on a 9-point scale before and after the application of tDCS over the mPFC. Cathodal tDCS over the mPFC, which transiently inhibits neural excitability in the region, led to a decrease in beauty ratings but not ugliness ratings, whereas sham stimulation over the mPFC did not affect beauty or ugliness ratings. The results of our experiment indicate that the mPFC plays a causal role in generating the subjective experience of beauty.


 

Neuro-behavioral Assessment of Visual Performance and Discomfort in High Luminance Displays

Presentation Number: P3.27 Abstract Number: 0134
Shun-nan Yang 1, *, Ju Liu 1, Manho Jang 2
1Vision Performance Institute, Pacific University College of Optometry
2DON Silicon Valley R/D Center

Excessive luminance bleaches photoreceptors and overstimulates the primary visual cortex, which can lead to reduced visual sensitivity and increased discomfort. Images rendered with high dynamic range (HDR) methods can expand the luminance distribution shown on digital displays, and the dynamic luminance change in such images can be exacerbated by image flickering. The present study investigated how temporally modulated luminance change impedes visual processing and affects viewing comfort. Participants with normal vision viewed blocks of trials alternating between a homogeneous luminance circle and a suprathreshold grating pattern. They were asked to identify the direction of the grating as quickly as possible while their EEG signals were recorded. The circle luminance was identical within each block of trials and randomized across blocks. Results show a positive correlation between luminance level and viewing symptoms, and negative correlations between luminance and both event-related VEP amplitude and discrimination accuracy. Chromatic luminance stimuli revealed that this sensitivity was specific to particular color-opponency pathways and varied across individuals. These findings suggest a luminance threshold of around 650 nits for displaying dynamic images, below which visual processing is not significantly impeded and visual abilities are preserved. This threshold varies with individual differences in luminance and chromatic sensitivity.


 

Landscape preference in Taiwanese school-aged children

Presentation Number: P3.28 Abstract Number: 0022
Chien Kai Chang 1, Shu-Fei Yang 2, Li-Chih Ho 3, Hui-Lin Chien 4, *
1Graduate Institute of Biomedical Sciences, China Medical University, Taichung, Taiwan
2Graduate Institute of Neural and Cognitive Sciences, China Medical University, Taichung, Taiwan
3Department of Environmental and Hazards-Resistant Design Huafan University, New Taipei City, Taiwan
4Graduate Institute of Biomedical Sciences, Graduate Institute of Neural and Cognitive Sciences, China Medical University, Taichung, Taiwan

We are fond of beautiful scenery, but not all types of scenery are equally fascinating. A recent study using a computational model of visual signals to predict landscape preference found that Taiwanese young adults showed a higher preference for natural scenes than for urban scenes. The present study aimed to explore landscape preference in Taiwanese school-aged children (5 to 12 years old) using the same image database. Each participant viewed 80 pictures covering four natural scene types (coasts, forests, countryside, mountain views) and four urban scene types (highways, tall buildings, streets, inner cities), with 10 pictures per type. There were six different sets of 80 pictures drawn from the 480-picture image database (Ho et al., 2015). The child participants rated their preference for each picture from one (strongly disliked) to five (strongly liked). We found that Taiwanese children showed a significantly higher preference for natural scenes than for urban scenes, and their preference for coast scenes was the highest of all types. The present study revealed that, like adults, Taiwanese children exhibit a stronger preference for natural scenes than urban scenes, which supports the prospect-refuge theory that natural scenes simultaneously provide abundance and a sense of security to meet human needs.


 

The salient partner: identity-referential saliency evoked by physical presence

Presentation Number: P3.29 Abstract Number: 0110
Miao Cheng 1, *, Chia-huei Tseng 2
1University of Hong Kong
2Research Institute of Electrical Communication, Tohoku University

Neutral information enjoys prioritized processing when associated with the self or significant others. However, it remains unclear what contributes to identity-referential saliency. We examined whether familiarity is necessary to create an identity-related advantage by introducing a stranger as a partner. Participants associated three geometric shapes with their own, their partner's, and a stranger's names, and reported whether name-shape pairings were correctly matched. We misled participants to believe that, after the individual condition, they would perform the task together with their partner, whereas in reality all participants performed the task only individually. In Experiment 1, each participant met his/her assigned partner briefly without further communication, while in Experiment 2 the partner never appeared physically. Consistent with previous studies, self-related trials received a processing advantage (higher accuracy, shorter response times) over partner- and stranger-related trials in both experiments. More importantly, trials related to the partner's name also received a similar advantage over those related to the stranger's name in Experiment 1, but this partner advantage disappeared in Experiment 2. This novel finding suggests that identity-referential saliency can be built up quickly towards a stranger without prior familiarity, and that physical presence is a substantial contributor. Our study has theoretical implications for understanding the nature of identity-referential saliency and for dissociating self- and other-advantages.