P1.1 The prevalence of between-hands spatial codes in a tactile Simon task
Gherri, E., & Theodoropoulos, N. University of Edinburgh
When a tactile stimulus is presented to our body, its spatial location is automatically coded, modulating behavioural performance, even when space is completely task-irrelevant (Tactile Simon effect). Here we present a series of studies investigating whether multiple spatial codes are created for the location of tactile stimuli in a tactile Simon task. In the two hands task (Exp. 1 and 3), in which stimuli were presented to one of four possible locations (left and right finger on the left and right hand), the tactile target was automatically coded according to the location of the stimulated hand (between-hands Simon effect) but not according to the location of the stimulated finger (within-hand Simon effect). By contrast, a reliable within-hand Simon effect was observed in the one hand task (Exp. 2 and 3), when tactile stimuli were presented to one of two possible locations on the same hand. Results reveal that tactile stimuli to the fingers are initially encoded relative to their between-hands location. Only when this code is absent or weak is it possible to observe a within-hand Simon effect. Thus, unlike the visual Simon effect (on the horizontal dimension), in which multiple spatial codes are simultaneously used to encode the locations of stimuli, the tactile Simon effect is primarily based on a single spatial code.
P1.2 Neural underpinnings of audio-visual integration in the Pip and Pop effect
Fleming, J.T., Noyce, A.L. & Shinn-Cunningham, B.G. Harvard University
During visual search through a dynamic display, reaction times are reduced if a tone is synchronized with changes to a property of a visual target, such as its color. This phenomenon, termed the Pip and Pop effect (Van der Burg et al., 2008), has been demonstrated in multiple behavioral studies, but its neural underpinnings have not been fully explored. In one electroencephalography (EEG) study using a similar paradigm, early multisensory interactions were found most strongly over left visual cortex (Van der Burg et al., 2011). Here, we performed EEG while participants did the original Pip and Pop task, with one of two modifications. In Experiment 1, the synchronous tone could either come from a central loudspeaker – spatially congruent with the visual display – or a lateral speaker at a spatial separation of 90 degrees. In Experiment 2, we manipulated the temporal congruence between the tones and visual target changes. Experiment 1 showed that reaction times improved when a temporally synchronous tone was present, regardless of spatial congruence. In addition, event-related potentials (ERPs) evoked by audio-visual events were significantly enhanced relative to the sum of their unisensory parts. Cluster-based permutation testing revealed that this effect occurred between 100ms and 300ms post-stimulus, with a broad distribution across the scalp. The multisensory enhancement was also significantly stronger for the last audio-visual event preceding target detection as compared to the other stimuli in the trial. Mirroring the behavioral results, these audio-visual effects were present regardless of whether the auditory and visual stimuli were spatially congruent. On the other hand, Experiment 2 demonstrated that reaction time effects were highly sensitive to temporal synchrony, consistent with the effect being subject to temporal windows of audio-visual integration. Taken together, these results demonstrate the key importance of temporal synchrony in facilitating audio-visual integration, with spatial congruence playing a more modest role.
P1.3 Gender difference of a stroking person influences rubber hand illusion according to autistic traits
Tsuboi, K., Fukui, T. Graduate School of System Design, Tokyo Metropolitan University
The aim of this study was to investigate 1) whether the strength of the rubber hand illusion is modulated by the person who strokes both the participant's hand and the fake one synchronously and 2) whether each participant's autistic traits are related to the strength of the illusion. Three conditions were tested: 1) a partner, 2) an unknown female, or 3) an unknown male stroked a male participant's hand and the rubber hand synchronously. After each condition, proprioceptive drift (PD) and the illusion questionnaire score were recorded. Furthermore, participants were required to complete the Autism Spectrum Quotient (AQ) test before the experiment. No significant differences among the three conditions were found in either PD or illusion score, although the illusion itself was induced in all conditions. However, we found a significant correlation between illusion score and AQ score in the unknown female condition, while no significant correlation was found in the other two conditions. Specifically, a higher AQ score was associated with less feeling of ownership over the rubber hand only when the stroking person was an unknown female. This result suggests that the subjective experience of ownership could be modulated by the gender of the stroking person and individual autistic traits.
P1.4 The role of semantic congruency and awareness in spatial ventriloquism
Delong, P. & Noppeney, U. Computational Neuroscience and Cognitive Robotics Centre, University of Birmingham, UK
The extent to which signals from different senses can interact in the absence of awareness is controversial. Models of global workspace predict that unaware signals are confined to processing in low-level sensory areas and thereby are prevented from interacting with signals from other senses in higher order association areas. Previous research has shown that semantically congruent sounds can increase visibility of visual stimuli obliterated from awareness. Here we investigated whether unaware images can influence spatial perception of consciously perceived sounds and whether this spatial ventriloquism could be affected by audio-visual semantic correspondence.
Pairs of semantically congruent or incongruent audio-visual stimuli were presented in a spatial ventriloquist paradigm. In the first experiment, participants performed a simple sound localization task. In the second experiment, we applied sandwich masking to modulate visual awareness, and subjects also reported image visibility and semantic category.
We observed robust modulation of spatial ventriloquism by semantic congruency, but only in the experiment without masking. In the sandwich masking paradigm, sound localization accuracy was still affected by spatial congruency (ventriloquist illusion). However, semantic correspondence did not affect spatial ventriloquism, even for visible trials. At the same time picture identification accuracy and percentage of visible images were significantly higher for semantically congruent stimuli.
These results suggest that audio-visual integration is not affected by semantic correspondence between sound and image when the latter is not consciously perceived. As in previous studies, we observed an impact of semantic congruency on visual perception; however, this could be explained by semantic priming (which has been shown for stimuli within a single sensory modality) rather than necessarily reflecting multisensory interactions.
Acknowledgments: This research was funded by the European Research Council (ERC-multsens).
P1.5 A Pair of Ambiguous Visual Stimuli Improves Auditory Spatial Discrimination
Cappelloni, M.S., Shivkumar, S., Haefner, R.M. & Maddox, R.K. University of Rochester
Studies of the audio-visual ventriloquist effect show that concurrent presentation of a visual and auditory stimulus from nearby locations can lead to visual “capture” of the auditory location. This is typically described as a bias, but possible refinements of auditory localization by a collocated visual stimulus have not typically been the focus of experiments, nor have situations in which there are multiple auditory and visual stimuli. Here we tested both of these notions, with results pointing to a “double ventriloquist effect” in which presentation of two task-uninformative visual stimuli refines the location estimates of two simultaneous auditory stimuli, leading to improved discrimination.
We presented listeners simultaneously with two symmetrically-lateralized auditory stimuli (a tone complex and a pink noise token) and two visual stimuli of per-trial-randomized shape and color that could not be associated with either auditory stimulus, and asked subjects to report the side of the tone. A range of auditory separations was tested, while the visual locations either matched the auditory ones or were both central. Most subjects showed improvements in their auditory spatial discrimination thresholds in the matched visual condition, even though the visual stimuli provided no task-relevant information.
The behavioral data are well fit by a Bayesian model that assumes approximate rather than exact inference. Though we only tested visual stimuli centrally or at the veridical auditory azimuth, the model predicts that auditory discrimination could be further improved by visual stimuli at exaggerated azimuths. When visual stimuli are farther apart than auditory stimuli, they also bias the two location estimates in opposite directions away from the midline. This may indicate that in a complex scene, the ventriloquist effect can result in independently biasing the perceived locations of multiple auditory stimuli towards the nearest plausible visual targets.
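As a point of reference for the kind of refinement reported above, the sketch below shows textbook reliability-weighted cue combination for a single audio-visual pair under exact inference; the authors' model instead assumes approximate inference over multiple stimuli, so this is only the standard baseline, and all numeric values are illustrative.

```python
import numpy as np

def combine_cues(mu_a, sigma_a, mu_v, sigma_v):
    """Reliability-weighted fusion of an auditory and a visual location estimate."""
    w_a = (1 / sigma_a**2) / (1 / sigma_a**2 + 1 / sigma_v**2)   # weight on audition
    mu_comb = w_a * mu_a + (1 - w_a) * mu_v                      # fused location estimate
    sigma_comb = np.sqrt((sigma_a**2 * sigma_v**2) / (sigma_a**2 + sigma_v**2))
    return mu_comb, sigma_comb

# Illustrative values (degrees azimuth): a noisy auditory estimate near a more
# precise, roughly collocated visual one.
mu, sigma = combine_cues(mu_a=4.0, sigma_a=6.0, mu_v=3.0, sigma_v=4.0)
print(f"fused estimate = {mu:.1f} deg, fused sd = {sigma:.1f} deg")
# The fused sd (about 3.3 deg) is below either single-cue sd, so discrimination improves.
```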
P1.6 The Dynamic Double Flash Illusion: Auditory Triggered Replay of Illusory Visual Expansion
Stiles, N R.B., Tanguay Jr., A .R. & Shimojo, S. University of Southern California
In the classic double flash illusion, a real visual flash accompanied by two auditory tones (beeps) is followed by an illusory flash when one beep is simultaneous with the real flash and the other follows shortly thereafter. In particular, when a visual circle is flashed in sync with a beep, and then a second beep is played, an illusory circle is perceived to be flashed in sync with the second beep. The illusory flash has been previously shown to be triggered by the second auditory beep.
Our investigation extends the double flash illusion by showing that not only can an on-off flash be duplicated by this paradigm, but also that an illusory expansion (induced by the flash of a circular brightness gradient) can be triggered to replay when paired with two beeps. We hypothesize that this illusory expansion replay (or dynamic double flash) could be caused by the reactivation of subconscious patterns in early visual cortex generated by recent visual stimuli (similar to Transcranial Magnetic Stimulation (TMS) “replay”, but here activated crossmodally by sound). The perception of the dynamic double flash further supports the interpretation of the illusory flash as similar in its spatial and temporal properties to the perception of the real visual flash, in this case by replicating the illusory expansion of the real flash.
In a second experiment, we show further that if a circular gradient stimulus (generating illusory expansion) and a sharp-edged circle are presented simultaneously side-by-side with two beeps, one synchronous with the stimulus and one following, in some cases only one visual stimulus or the other will double flash. This observation indicates that the double flash illusion can be used as a tool to study differential auditory-visual binding by recording whether a given visual stimulus double flashes within a pair of synchronously presented stimuli.
P1.7 Brightness-mass matchings in adults’ reasoning of physical events
Sanal, N., Bremner, J.G. & Walker, P. Lancaster University
Previous research suggests that adults make cross-modal matchings between the brightness of objects and their perceived heaviness. Adults judge darker objects to be heavier than light-coloured objects (Walker, 2012). It is unknown whether these matchings enter into adults' judgements of physical events. Infants first start to make inferences about mass through size and distance relations in simple collision events at about 5.5-6.5 months of age (Kotovsky & Baillargeon, 1998). They anticipate a greater displacement of a standard object after collision with a large object and a lesser displacement after collision with a small object. Given the infant evidence, how adults would reason about the same events is unknown, especially if the objects are of the same size but vary in colour. The present study examined the brightness-mass relationship in 24 adults using 2D computer-animated collision events. Adults were first shown a reference event in which a grey billiard ball (Ø=60) rolled down a ramp and hit a grey cube (W=95, H=95), moving it to the midpoint of the screen. Next, adults saw four test events in random order, in which a white or black billiard ball (Ø=60) moved the grey cube either to a point before the midpoint (i.e. a short distance) or beyond it (i.e. a longer distance). Adults were first asked to rate the test events on how real they were (part A) and later to rate them in comparison to the reference event (part B). Adults thought it more likely for the white ball to move the cube a short distance than a longer distance in part A. In part B, adults judged it more likely for the white ball to move the cube a short distance and the black ball to move the cube a longer distance. In conclusion, adults based their judgements solely on the colour of the object.
Acknowledgments: The Leverhulme trust, Lancaster Babylab
P1.8 The rubber hand illusion in merged vision with another person
Okumura, K., Ora, H. & Miyake Y. Department of Computer Science, Tokyo Institute of Technology
Illusions involving multiple sensory modalities have given us important knowledge about how we perceive our body. In the rubber hand illusion (RHI), one of the best-known of these illusions, synchronous tactile stimulation of a person's hidden real hand and of an aligned visible rubber hand placed in front of them results in a feeling of ownership over the fake hand. However, the effect of the intensity of visual stimulation on the RHI has not been clarified. In this study, we examined whether the RHI is elicited by merging the first-person views of the participant (live) and of another person (recorded), so as to investigate the effect of the intensity of the visual stimuli of the rubber hand and of one's own hand. Participants were presented with videos showing their real hands on their right side and the virtual hand of another person to the left of the real hands, generated by merging the two first-person views and displayed through a head-mounted display. A camera was attached to the display to acquire the live first-person view of the participant. Their right hands were then stroked with a paintbrush, synchronously or asynchronously with the strokes on the virtual hand, for 150 seconds. The blending ratio of the live view (participant) to the recorded view (another person) was 2:8, 5:5 or 8:2. The illusion of ownership was evaluated by questionnaires and by proprioceptive drift, the difference between the estimated position of the real hand before and after the stimulation. The RHI was clearly observed only when the blending ratio was 2:8 and the touch condition was synchronous, while little effect was observed in the other conditions. Our results show that when the participant's own hand is made less visible and the virtual hand more visible, the RHI is induced by the combination of synchronized tactile and visual stimuli in merged first-person views of the participant and another person.
Acknowledgments: This work was partly supported by JSPS KAKENHI Grant Number 16H06789 and JST-COI
P1.9 Developmental susceptibility to visuospatial illusions across vision and haptics
Holmes, C.A., Cooney, S.M., & Newell, F.N. Trinity College, University of Dublin
Developmental studies of susceptibility to visuospatial illusions are limited and inconclusive [1], especially those that contrast perception across multiple sensory modalities [2, 3, 4]. Here, we examined spatial perception using three classic illusions – the Ebbinghaus, Muller-Lyer, and Vertical-Horizontal illusions – in which children explored the stimuli across three conditions: visual only, haptic only or bimodal. Specifically, we tested younger (6-8 years) and older children’s (9-12 years) ability to discriminate spatial extent in the presence (illusion trials) or absence of illusory contexts (i.e. control trials per illusion consisting of circles, horizontal lines, and vertical lines respectively). Spatial perception in all trial types was tested using a 3-AFC paradigm in which participants were presented with two adjacent stimuli and indicated which of the two was the larger or if both were the same. The results suggest both age groups were susceptible to all three illusions in vision and touch. Visual dominance in the control condition is consistent with previous reports suggesting developmental shifts in multisensory integration for small-scale spatial perception relating to object perception [5], as well as large-scale spatial perception for navigation [6]. Importantly, by examining the effect of modality on visuospatial susceptibility across dimensions, these findings can be used to inform mathematical pedagogy related to geometry in sighted as well as visually-impaired populations.
1. Doherty, M. et al. (2010). Developmental Science.
2. Duemmler, T. et al. (2008). Experimental Brain Research.
3. Hanisch, C., et al. (2001). Experimental Brain Research.
4. Mancini, F. et al. (2010). Quarterly Journal of Experimental Psychology.
5. Gori, M. et al. (2008). Current Biology.
6. Nardini, M. et al. (2008). Current Biology.
Acknowledgments: European Union’s Horizon 2020 Research and Innovation Programme under Grant Agreement No. 732391
List, S.M., McCormick, K., Lacey, S., Sathian, K. & Nygaard, L.C. Emory University
Humans share consistent associations, known as crossmodal correspondences (CCs), between seemingly unrelated features in different sensory modalities. While one of the fundamental properties of language is the assumed arbitrariness between sound and meaning, sound symbolism is a notable exception that has been studied empirically using CCs between auditory pseudowords (e.g. ‘loh-moh’) and visual shapes (e.g. blob). Others have investigated auditory-visual CCs and shown that modulating the physical dimensions that define them can influence multisensory integration. However, the characteristics of the auditory and visual stimuli that underpin sound symbolic CCs are not well understood. Here, we used representational similarity analysis to examine the relationships between physical stimulus parameters and perceptual ratings for a range of auditory nonwords (n = 537 stimuli; 31 participants) and visual shapes (n = 90 stimuli; 30 participants), which varied in ratings of roundedness and pointedness. Representational dissimilarity matrices (RDMs) for the perceptual ratings of the auditory and visual stimuli were significantly correlated (r = 0.66, p<0.0001), indicating a close relationship between ratings in the two modalities. In both visual and auditory domains, RDMs of multiple stimulus measures were significantly correlated with RDMs of the perceptual ratings. For instance, the RDM for the fast Fourier transforms of the auditory nonwords, reflecting their spectral composition, was significantly correlated with the RDM of the auditory perceptual ratings (r = 0.28, p<0.001), while the RDM for a measure capturing the spatial profile of the visual shapes, termed the simple matching coefficient, exhibited a significant correlation with the RDM of the visual perceptual ratings (r = 0.28, p<0.001). This research provides insights into the fundamental nature of sound symbolic CCs and how they might evoke specific interpretations of physical meaning in natural language at the physical and perceptual levels.
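To make the representational similarity logic concrete, the toy sketch below builds representational dissimilarity matrices from per-stimulus feature vectors and correlates their upper triangles; the stimulus count, feature dimensions, distance metrics, random data, and use of a Spearman correlation are assumptions for illustration, not the authors' pipeline.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_stim = 90

# Hypothetical data: perceptual ratings (e.g. roundedness/pointedness) and a
# physical measure (e.g. spectral or spatial features) for the same stimuli.
ratings = rng.normal(size=(n_stim, 2))      # n_stim stimuli x 2 rating dimensions
physical = rng.normal(size=(n_stim, 128))   # n_stim stimuli x 128 physical features

# RDMs: pairwise dissimilarities between all stimuli.
rdm_ratings = squareform(pdist(ratings, metric="euclidean"))
rdm_physical = squareform(pdist(physical, metric="correlation"))

# Compare the two RDMs on their upper triangles (excluding the diagonal).
iu = np.triu_indices(n_stim, k=1)
rho, p = spearmanr(rdm_ratings[iu], rdm_physical[iu])
print(f"RDM correlation: rho = {rho:.2f}, p = {p:.3g}")
```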
P1.11 Electrophysiological evidence for differences between fusion and combination illusions in audiovisual speech perception
Lindborg, A., Baart, M. & Andersen, T. S. Technical University of Denmark
Incongruent audiovisual (AV) speech can cause perceptual illusions such as the McGurk fusion, in which incongruent inputs (such as auditory 'ba' and visual 'ga') are fused into a novel percept (e.g. 'da'). However, switching the modality of the cues produces a McGurk combination (i.e. auditory 'ga' and visual 'ba' are perceived as 'bga' or 'gba'). Previous literature has shown differential AV integration patterns for fusion and combination stimuli, suggesting that different processes drive the respective illusions.
We explored whether electroencephalographic (EEG) correlates of audiovisual integration – visual-induced suppression of the auditory N1 and P2 event-related potential (ERP) components – are different for fusion and combination stimuli. We analysed EEG from 32 subjects, comparing the ERPs of the auditory (A) component to the audiovisual minus the visual component (AV-V) for congruent, fusion and combination stimuli.
We found that all AV stimuli suppressed the N1 and the P2 compared to A, and the P2 amplitude was the same for fusions and congruent stimuli. Critically however, P2 suppression was larger for combinations than for congruent stimuli, and the differences between these two types of stimuli extended well beyond the P2.
A possible interpretation of our results is that differences in mismatch processing contribute to the difference between the combination and fusion percepts from the P2 onwards, in line with the observation that fully incongruent AV speech yields a bigger P2 suppression than congruent AV speech (Stekelenburg & Vroomen, 2007). Further investigations will have to be made to confirm this interpretation, but it is nevertheless clear that the fusion and combination illusions have distinct electrophysiological signatures.
P1.12 Auditory feedback effects on spatial learning: shape recognition after audio-motor training
Martolini, C., Cappagli, G., Campus, C. & Gori, M. DIBRIS, University of Genoa – U-VIP, Italian Institute of Technology (Genoa)
Recent reports have demonstrated that the use of auditory feedback to complement or substitute for visual feedback of body movements is effective in conveying spatial information and can enhance sensorimotor learning. For instance, it has been shown that the curvature of a shape can be conveyed with solely auditory information in sighted individuals (Boyer et al., 2015) and that blind people can recognize objects by extracting shape information from visual-to-auditory sensory substitution soundscapes (Amedi et al., 2007). However, it still remains unclear whether sensorimotor integration might enhance auditory perception.
In the present study, we tested the possibility of improving auditory shape recognition through a specific training based on auditory and motor feedback. To assess the effects of the training, we focused on two features of the auditory shapes: semantic meaning and smoothness of contours.
We evaluated a group of sighted adults in two sessions of an auditory shape recognition task, in which each blindfolded participant was asked to identify auditory shapes resulting from the activation of consecutive loudspeakers embedded in a fixed two-dimensional vertical array. Between the sessions, participants performed an audio-motor training in which they were asked to reproduce a simple (one-joint) or complex (two/three-joint) arm movement initially carried out by the experimenter. The experimenter had an audio source attached to both wrists, while participants were provided with a single audio source on the dominant wrist as feedback on their own movements.
The main findings resulting from preliminary analysis suggest a stronger effect of the training on the semantic meaning factor, compared to the smoothness of contours factor.
Our work suggests that the introduction of combined auditory and motor feedback in a rehabilitative context might improve cross-modal re-calibration and shape recognition at a cognitive level.
P1.13 Human echolocators achieve perceptual constancy by discounting variations in click spectrum
Norman, L. J. & Thaler, L. Durham University
Perceptual constancy refers to the ability to perceive properties of objects as being stable despite changes in the sensation of those objects caused by extraneous conditions. Humans primarily use vision for perceiving distal objects, but some people perceive distal objects through the interpretation of sound echoes that the objects reflect – a skill known as echolocation. Human echolocators typically produce tongue clicks in order to do this, but these clicks can vary in their intensity or spectrum, causing extraneous changes in the acoustic properties of echoes reflected from objects. Here we tested whether humans are able to achieve perceptual constancy in echolocation, by testing whether they can discount changes in an object's echo that are brought about by variations in the click. We also considered the effect of echolocation experience in this context by testing expert echolocators as well as newly trained sighted and blind people. On each trial in our task, participants listened to two successive echolocation sounds (i.e. click-echo pairs) through headphones and judged whether the difference in the echoes between the two echolocation sounds was due to a difference in the click emission or a difference in the object that had reflected the sound. For click or object differences carried through spectral changes, blind expert echolocators were able to perform this task well, and much better than sighted and blind people new to echolocation. For differences carried through intensity changes, however, performance in all groups was not much greater than chance. Overall, the data suggest that human echolocators can use spectral information to achieve perceptual constancy for objects across variations in the spectrum of the click emission. This ability depends on the degree of experience using echolocation, implying that perceptual constancy in a novel sensory skill can be acquired through learning.
Acknowledgments: This work was supported by the Biotechnology and Biological Sciences Research Council grant to LT (BB/M007847/1)
P1.14 Occipital early responses to sound localization in expert blind echolocators
Tonelli A., Campus C., & Gori M. Istituto Italiano di Tecnologia
Echolocation is an ability that some blind individuals develop to orient themselves in the environment using self-generated sounds. A recent study (Vercillo et al., 2014) showed that expert blind echolocators have a better sense of auditory spatial representation compared to congenitally blind individuals, and performance similar to that of sighted people. In the current study, we investigated the neural correlates of an auditory spatial bisection task in congenitally blind expert and non-expert echolocators. We found an early activation (50-90 ms) of the occipital cortex in response to sound stimulation only in the group of expert echolocators. Moreover, this early occipital activation was contralateral to the position of the sound to be localized. Similar results were reported by Campus et al. (2017), who found the same activation in sighted people performing the same task. These findings suggest that echolocation is a good substitute for vision for improving the general sense of auditory space, through a process of sensory calibration using sounds.
P1.15 Rapid, flexible cue combination with augmented and familiar sensory signals
Negen, J., Wen, L., Probert, H., Thaler, L. & Nardini, M. Durham University
Humans are highly effective at dealing with noisy, probabilistic information from multiple sensory systems in familiar settings. One hallmark of this is cue combination: combining two independent noisy sensory estimates to increase precision beyond the best single estimate, taking into account their reliabilities. We will present evidence that this process also occurs in situations akin to common methods for augmented sensory perception, specifically human echolocation and devices translating distance measurements into vibrotactile signals (like the EyeCane). Following just two hours of training with one of the new sensory skills (N=12 each), participants were asked to estimate distances with their new skill, with a noisy visual cue, or with both vision and augmented perception. Participants were more precise given both cues together versus the best single cue, reducing variable error by 16% (echo-like) and 13% (vibrotactile), both p-values below .001. For the echo-like cue, this persisted when we changed the auditory frequency without feedback (no similar manipulation was done for vibrotactile). In both cases, reliability changes also led to a re-weighting of cues, meeting the predictions of a core principle of Bayesian combination, rather than suggesting use of a decision rule for specific stimuli. These results show that the mature multisensory apparatus can learn to flexibly integrate new skills into its repertoire on a rapid timescale, contrary to both model predictions and previous empirical results with another augmented sensory system (the feelSpace belt). Our results have applications to (1) understanding the role of experience vs maturation during development of cue combination, (2) the use of sensory augmentation to meet clinical needs (e.g. combining a new sensory skill with remaining vision for partial vision loss), as well as (3) enhancing healthy perception in novel ways (for example, a surgeon flexibly combining native sight and augmented hearing as guides).
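The re-weighting prediction referred to here is the standard Bayesian one; the brief sketch below, with invented single-cue noise levels rather than the study's data, shows how the optimal weight on the new sensory skill and the predicted variable-error reduction change when vision is made less reliable.

```python
import numpy as np

def optimal_combination(sigma_new_skill, sigma_vision):
    """Optimal-cue-combination predictions: weight on the new skill and combined sd."""
    w_new = sigma_vision**2 / (sigma_new_skill**2 + sigma_vision**2)
    sigma_comb = np.sqrt((sigma_new_skill**2 * sigma_vision**2) /
                         (sigma_new_skill**2 + sigma_vision**2))
    return w_new, sigma_comb

# Illustrative noise levels (arbitrary distance units); vision is degraded in the second case.
for sigma_vision in (10.0, 20.0):
    w, sd = optimal_combination(sigma_new_skill=12.0, sigma_vision=sigma_vision)
    best_single = min(12.0, sigma_vision)
    print(f"vision sd = {sigma_vision}: weight on new skill = {w:.2f}, "
          f"combined sd = {sd:.1f} ({100 * (1 - sd / best_single):.0f}% below best single cue)")
```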
P1.16 Multimodal feedback for spatial learning: comparing the effects on sighted and visually impaired individuals.
Cappagli G., Cuppone A.V., & Gori M. Istituto Italiano di Tecnologia
In recent years, the role of multisensory training in spatial learning has received increasing attention, with evidence, for example, that combined multimodal feedback improves responsiveness to spatial stimuli compared to unimodal feedback. To date, it remains unclear how training conditions influence spatial enhancement and to what extent multisensory training enhances spatial perception in the case of sensory loss. Here we investigated the effects of active and passive audio-motor training on spatial perception in the auditory and proprioceptive domains in sighted and blind participants.
We found that for sighted participants, passive multimodal training (both auditory and proprioceptive passive movements) is more beneficial than both active multimodal training (both auditory and proprioceptive active movements) and unimodal training (auditory or proprioceptive only), and that spatial improvement generalizes to the untrained side of the body only when the training is passive. Moreover, we found that passive multimodal training produces a similar spatial enhancement in blind participants, especially in the proprioceptive domain, indicating that combined sensorimotor signals are effective in recalibrating auditory and proprioceptive spatial perception in the case of visual loss. A possible interpretation of these results is that the perceptual benefit obtained with the multimodal training reflects the refinement of coherent audio-motor spatial maps that are necessary to orient the body in space.
Acknowledgments: This publication is part of weDRAW project that has received funding from the European Union’s Horizon 2020 research and innovation programme under Grant Agreement No. 732391
P1.17 The early auditory-evoked cortical response predicts auditory speech-in-noise identification and lipreading ability in normal-hearing adults
Dias, J.W., McClaskey, C.M. & Harris, K.C. Medical University of South Carolina
Perceivers suffering from sensory deprivation or loss have been known to recruit more cross-sensory resources when navigating the world, a phenomenon typically associated with cross-sensory neural plasticity (Rosenblum, Dias, & Dorsi, 2016). In the auditory domain, cortical evoked responses, specifically P1 and N1, can be used to assess the timing and efficacy of sensory transmission between the ear and the brain. Reduced amplitudes and increased latencies associated with auditory neuropathy and presbycusis predict degraded auditory perception of both speech and non-speech events (Harris, Wilson, Eckert, & Dubno, 2012; Narne & Vanaja, 2008). However, subtle individual differences in the auditory-evoked responses of normal hearing adults also predict variability in auditory perception (Narne & Vanaja, 2008). We predict that smaller auditory-evoked responses in normal-hearing adults will be associated with better processing of visual information. The current investigation examines the extent to which individual differences in the auditory-evoked response predict auditory and visual speech identification in normal-hearing adults. Eighteen normal-hearing young adults between the ages of 19 and 30 (13 female) participated in this study. P1 and N1 components were extracted from click-induced auditory-evoked responses. Participants identified auditory, visual (lipread), and audiovisual speech within three levels of auditory noise (-5 dB, 0 dB, and +5 dB auditory signal-to-noise ratios). Smaller P1 amplitudes predicted poorer auditory speech identification in the most difficult listening conditions. However, smaller P1 amplitudes also predicted better visual speech identification, suggesting that individuals with smaller auditory-evoked responses may be more proficient lipreaders. Magnetic Resonance Spectroscopy (MRS) and Magnetic Resonance Imaging (MRI) from a subset of participants will be used to explore the neuroanatomical correlates underlying the relationship between the auditory-evoked response and auditory-visual speech processing. Results will be discussed relative to the potential factors affecting auditory-evoked responses and how these factors may contribute to cross-sensory recruitment of visual processes.
P1.18 Temporal tuning of immediate and repeated exposure to audio-visual spatial discrepancies
Goodliffe, J.P., Roach, N.W. & Webb, B.S. University of Nottingham
Human perception of an external event is typically a coherent multisensory experience. To keep the senses in register, the brain appears to monitor the correspondence of different sensory inputs over different timescales and correct for any discrepancies between them. Exposure to a single spatially discrepant audio-visual stimulus can lead to the ventriloquist effect (VE), whereby the perceived location of the sound is biased towards the location of the visual stimulus. Repeated exposure to audio-visual stimuli with a consistent spatial offset causes a lasting remapping of auditory space – known as the ventriloquist aftereffect (VA). Despite their functional similarity, few studies have systematically compared the characteristics of these effects within the same experiment. Here we examine the temporal tuning of both effects using a common method to measure perceptual bias. Audio-visual stimuli were presented at 15 azimuths over a range of 70 degrees. Visual stimuli were 150ms luminance-defined Gaussian blobs projected onto a wide-field, immersive visual display. Auditory stimuli were 150ms bursts of pink noise, convolved with generic head related transfer functions and reverberant room cues and played through headphones. In VE blocks, audio-visual stimuli were spatially separated by +/- 10 degrees with stimulus onset asynchronies from 0 to 1400ms audio lead. Participants reported their perceived auditory location while ignoring visual stimuli. In VA blocks, participants were repeatedly exposed to consistent spatial and temporal discrepancies (75 pairs) before reporting perceived auditory location on audio-only test trials. Our results show that both tasks produce robust biases in the perceived locations of auditory stimuli. While the VE was consistently larger than the VA with synchronous audio-visual stimuli, both effects decreased with audio-visual asynchrony at a comparable rate. The similar temporal tuning profiles indicate that the effects are closely related, either by the VE facilitating the VA, or via co-dependencies with earlier multisensory binding processes.
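For readers who want to approximate stimuli of the kind described (150 ms pink-noise bursts spatialized by convolution with head-related impulse responses), a rough sketch follows; the sampling rate, normalization, and the dummy impulse responses are assumptions, and a real experiment would substitute measured generic HRTFs and room responses for the placeholder filters.

```python
import numpy as np
from scipy.signal import fftconvolve

def pink_noise_burst(duration_s=0.150, fs=44100, seed=0):
    """Approximate pink (1/f power) noise via spectral shaping of white noise."""
    rng = np.random.default_rng(seed)
    n = int(duration_s * fs)
    spectrum = np.fft.rfft(rng.normal(size=n))
    freqs = np.fft.rfftfreq(n, d=1 / fs)
    freqs[0] = freqs[1]                      # avoid dividing by zero at DC
    spectrum /= np.sqrt(freqs)               # 1/f power = 1/sqrt(f) amplitude
    burst = np.fft.irfft(spectrum, n=n)
    return burst / np.max(np.abs(burst))     # normalize to +/- 1

burst = pink_noise_burst()

# Placeholder impulse responses (hypothetical): a unit impulse and a delayed,
# attenuated copy stand in for measured left/right HRIRs at some azimuth.
hrir_left = np.zeros(256);  hrir_left[0] = 1.0
hrir_right = np.zeros(256); hrir_right[8] = 0.7

binaural = np.stack([fftconvolve(burst, hrir_left),
                     fftconvolve(burst, hrir_right)], axis=-1)
print(binaural.shape)   # (samples, 2): a stereo signal ready for headphone playback
```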
P1.19 Audiovisual crossmodal correspondences between bubbles’ size and pouring sounds’ pitch in carbonated beverages
Roque, J.R., Lafraire, J.L., & Auvray, M.A. Centre de Recherche Pernod Ricard, France; Center for Food and Hospitality Research, Institut Paul Bocuse, France; Sorbonne Université, UPMC, CNRS, Institut des Systèmes Intelligents et de Robotique (ISIR), F-75005 Paris, France
The literature on crossmodal correspondences has reported an implicit association between auditory pitch and the size of visually-presented circles. However, whether more ecological and complex audiovisual stimuli are stable enough to allow for a pitch-size correspondence effect remains to be investigated. Based on recent studies, two features of carbonated beverages were selected as the ecological counterparts of the above-mentioned pitch and circles: the bubbles' size (small vs. big) and the pitch of the pouring sound of a carbonated beverage (high-pitched vs. low-pitched). To study a potential crossmodal correspondence between these attributes, a modified version of the Implicit Association Test (IAT) was used. Participants had to respond to four unimodal stimuli, either visual or auditory, which were paired either congruently (small bubbles and high-pitched sounds; big bubbles and low-pitched sounds) or incongruently (the reverse associations). The analysis of the latency and accuracy of the participants' responses confirmed the existence of a pitch-size correspondence effect between these attributes. A Go/No-go Association Task (GNAT) was subsequently used to evaluate the relative strengths of these associations through the analysis of the sensitivity of the participants' responses. Our results highlight the existence of crossmodal correspondences between perceptual features involved in the multisensory experience of carbonated beverages. Since these sensory cues have been reported to influence the perception of freshness, we conclude that these correspondences could be exploited to ease consumers' categorization of a given product as fresh, or even lead to freshness enhancement. Such perceptual mechanisms represent promising levers for the acceptance and appreciation of beverages.
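The GNAT sensitivity analysis mentioned above is conventionally computed as a signal-detection d' for each pairing; the sketch below uses made-up hit and false-alarm counts and one common correction for extreme rates, so it illustrates the computation rather than reproducing the authors' analysis.

```python
from scipy.stats import norm

def dprime(hits, misses, false_alarms, correct_rejections):
    """Signal-detection sensitivity; the +0.5/+1 (log-linear) correction keeps
    z-scores finite when a hit or false-alarm rate would be 0 or 1."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Hypothetical counts for a block pairing small bubbles and high pitch as the 'go' category.
print(f"d' = {dprime(hits=46, misses=4, false_alarms=8, correct_rejections=42):.2f}")
```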
P1.20 Face Viewing Behavior Predicts Multisensory Gain During Speech Perception
Rennig, J., Wegner-Clemens, K. & Beauchamp, M.S. Baylor College of Medicine
During face viewing, some individuals prefer to fixate the mouth while others fixate the eyes; the consequences of this difference are unknown. During speech perception, viewing the talker's face improves comprehension because mouth movements are associated with speech sounds. Individuals who have a history of mouth fixation might have formed stronger associations between visual and auditory speech, resulting in improved comprehension. To test this idea, we first measured eye movements during a face-viewing task in which mouth movements were unimportant. Replicating previous work, there was substantial interindividual variability in the amount of time participants fixated the mouth, ranging from 11% to 99% of total fixation time. Next, we measured eye movements and comprehension during perception of noisy auditory speech with or without visual speech. When visual speech was present, all participants primarily fixated the mouth (72% to 100% of total time) and derived substantial benefit, recognizing on average 31% more words than for noisy auditory speech alone. However, there was high interindividual variability, with multisensory gain ranging from 6% to 56%. The benefit of visual speech for each participant was predicted by the eye movements made during the initial face-viewing task (r = 0.44, p = 0.01) but not by eye movements during the noisy speech task (r = 0.05, p = 0.77), an observation confirmed with Bayesian model comparison. Participants who fixated the mouth when it was not important (during the initial face-viewing task) received more benefit from fixating the mouth when it was important (during the noisy speech task). These findings suggest an unexpected link between eye movement behavior during face viewing and audiovisual speech perception and suggest that individual histories of visual exposure shape human abilities across cognitive domains.
P1.21 Audiovisual recalibration and selective adaptation for vowels and speaker sex
Burgering, M. A., Baart, M. & Vroomen, J. Tilburg University
Humans quickly adapt to variations in the speech signal. Adaptation may surface as recalibration, a learning effect driven by error-minimization between lipread (or lexical) information and the auditory speech signal (e.g., listeners report more /b/-responses to an ambiguous test sound halfway between /b/ and /d/ if the ambiguous sound was previously combined with lipread /b/), or as selective adaptation, a contrastive aftereffect driven by “neural fatigue” of auditory feature detectors (e.g., listeners report fewer /b/-responses to the ambiguous test sound if preceded by clear auditory /b/). Here, we examined for the first time if these assimilative and contrastive aftereffects occur for vowels and speaker sex using multidimensional speech sounds. Participants were exposed to videos of a male/female speaker pronouncing /e/ (in the context of beek) or /ø/ (in the context of beuk) that were paired with clear or ambiguous vowels of clear (male/female) or androgynous speaker sex. In a subsequent test phase, they then categorized test sounds for vowel identity or speaker sex. With identical adapter stimuli, audiovisual recalibration and selective adaptation could be demonstrated for both vowels and speaker sex.
P1.22 Crossmodal correspondences between pitch, retinal size, and real-world size
Janini, D. & Konkle, T. Harvard University
Natural crossmodal mappings exist between visual size and auditory pitch: small circles associate with high tones and big circles associate with low tones (Evans and Treisman, 2010). Given proposals that such mappings originate from statistical learning in real-world environments, we first tested the pitch-visual size relationship, and then explored whether this crossmodal relationship extended to the familiar size of real-world objects. In the replication studies, participants judged whether a circle on the screen was big or small (direct task) or judged whether the stripes in the circle were oriented left or right (indirect task), ignoring the high or low tone that preceded circle onset by 150ms. Reaction times were indeed faster when pitch was congruent with circle size, but this effect was only observed for the direct task and was not replicated when the circles were embedded in noise to increase task difficulty. In the extension studies, participants were shown pictures of real-world objects (e.g. paperclip, desk) at the same visual size and responded whether the object was big or small in the world, ignoring the same tone stimuli from the prior studies. We found that reaction times were faster for congruent than incongruent pairings in the direct task, showing this mapping may naturally extend to real-world objects. However, as in the previous studies, the effect was not evident in the indirect task, nor when the images were embedded in noise. Previous researchers have proposed that crossmodal correspondences allow for more accurate estimates of environmental properties in noisy multi-sensory environments (Spence, 2011). Our findings confirm that natural correspondences between pitch, retinal size, and real-world size do exist. However, these correspondences may not be robust enough to guide behavior in noisy real-world settings.
P1.23 Adapting emotions across the senses: the benefit of congruent over incongruent audiovisual emotional information depends on the visibility of emotional faces
Izen, S.C., Morina, E., Leviyah, X., & Ciaramitaro, V.M. University of Massachusetts Boston
Correctly interpreting the emotional state of others is crucial for social interaction. Often, this involves integrating information across faces and voices. The current study investigated how emotional sounds influence the perception of emotional faces. Although exposure to emotional faces produces an opposite aftereffect (for example, adapting to happy faces biases neutral faces to be perceived as angrier; Rutherford et al., 2008), the evidence on how emotional faces and voices interact is conflicting. Given the principle of inverse effectiveness, that multisensory integration is most robust when the individual stimuli are less effective (Stein & Meredith, 1993), we hypothesized increased multisensory interactions for faces of decreased salience. We quantified adaptation strength as a proxy for the strength of multisensory interactions, comparing emotional faces embedded in noise (decreased visual salience) with unedited emotional faces.
Participants judged a series of 8 face identities morphed on a continuum from 80% angry to 80% happy as either happy or angry at baseline and post-adaptation to congruent (100% happy faces and positive sounds) or incongruent (100% happy faces and negative sounds) stimuli. Adapting faces were unedited or embedded in noise. For each participant, we calculated the face morph perceived as neutral, equally likely to be judged happy or angry, and the change in this point of subjective equality (PSE) post-adaptation. We expected greater PSE shift magnitude for congruent versus incongruent emotions, with the greatest shift post-adaptation for faces of decreased salience.
Our results suggest adaptation to congruent and incongruent multimodal stimuli biases neutral faces to be perceived more negatively, with significantly larger shifts following exposure to congruent versus incongruent stimuli, but only for faces in noise. This suggests multisensory integration of emotional information also follows the principle of inverse effectiveness, being more effective for concurrent auditory information of matched valence when faces have diminished perceptual salience.
P1.24 Naturalistic Stimuli Reveal Selectivity for Eye and Mouth Movements within the Human STS
Zhu, L.L. & Beauchamp, M.S. Baylor College of Medicine
Speech perception is a multisensory process that combines visual information from the talker's face with auditory information from the talker's voice. Posterior temporal cortex, including the superior temporal sulcus (STS), is a key brain locus for multisensory speech perception. Previously, we used BOLD fMRI to demonstrate anatomical specialization within the STS. More anterior regions of the STS preferred visually-presented mouth movements while more posterior regions preferred eye movements. These experiments used only 10 different videos (all recorded in a laboratory setting), a limited stimulus set that is not an accurate representation of everyday visual experience. To examine the generalizability of our results, we created 120 videos of naturalistic mouth and eye movements taken from YouTube. BOLD fMRI data were collected from 16 participants using a Siemens 3T Prisma scanner as they viewed five repetitions of each video (600 total videos) while performing a simple task (discriminating mouth from eye videos). First, we replicated our previous result: more anterior regions preferred mouth videos and more posterior regions preferred eye videos. An item-wise analysis that treated stimulus as a random effect revealed substantial inter-stimulus differences within the preferred class, ranging from 0.07% to 0.85% for mouth videos in mouth-preferring STS. To examine the visual features responsible for these differences, we constructed a linear regression model with 5 variables for each stimulus (visual motion energy, contrast, mouth size, mouth motion, mouth typicality). Model fits with and without each feature were compared using the Akaike Information Criterion. Within mouth-selective voxels, the degree of mouth motion within each video was the best predictor of the BOLD response. These findings confirm and extend previous findings of functional specialization related to visual speech processing within the human STS.
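To make the model-comparison step concrete, here is a minimal sketch of dropping one regressor at a time from an ordinary-least-squares fit and comparing AIC values; the simulated responses, the exact AIC formula, and the fitting procedure are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def aic_ols(X, y):
    """AIC for an ordinary least-squares fit with Gaussian errors."""
    n, k = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    # Gaussian log-likelihood up to constants; +1 free parameter for the noise variance.
    return n * np.log(rss / n) + 2 * (k + 1)

rng = np.random.default_rng(1)
features = ["motion_energy", "contrast", "mouth_size", "mouth_motion", "mouth_typicality"]
X = np.column_stack([np.ones(120)] + [rng.normal(size=120) for _ in features])  # intercept + 5 regressors
y = 0.5 * X[:, 4] + rng.normal(scale=0.2, size=120)   # fake responses driven mainly by mouth motion

full_aic = aic_ols(X, y)
for i, name in enumerate(features, start=1):          # column 0 is the intercept
    reduced = np.delete(X, i, axis=1)
    print(f"drop {name}: delta AIC = {aic_ols(reduced, y) - full_aic:+.1f}")
```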
P1.25 Performing a task jointly modulates audiovisual integration in timing and motion judgements
Wahn, B., Dosso, J., Tomaszewski, M. & Kingstone, A. University of British Columbia
Humans constantly receive sensory input from multiple sensory modalities. Via the process of multisensory integration, this input is often combined into a unitary percept. Recent research suggests that audiovisual integration is affected by social factors (i.e., when a task is performed jointly rather than alone) in a crossmodal congruency task (Wahn, Keshava, Sinnett, Kingstone, & König, 2017). However, these findings were concerned with reaction time data and thus it cannot be excluded that social factors affected the preparation or execution of the motor response rather than the audiovisual integration process itself. To address this point, we investigated whether social factors affect perceptual judgements in two tasks (i.e., a motion discrimination task and a temporal order judgement task) that previous research has shown to yield reliable audiovisual integration effects which alter perceptual judgements (Soto-Faraco, Lyons, Gazzaniga, Spence, & Kingstone, 2002; Morein-Zamir, Soto-Faraco, & Kingstone, 2003). In both tasks, pairs of participants received auditory and visual stimuli and each co-actor was required to make perceptual judgements for one of the sensory modalities in a joint and an individual condition. We find that audiovisual integration effects are reduced when participants perform the tasks jointly compared to individually. In an additional experiment, this effect did not occur when participants performed the tasks alone while being "watched" by a camera (i.e., an implied social presence). Overall, our results suggest that social factors influence audiovisual integration, and that this effect is specific to live partners and not sources of implied social presence.
Acknowledgments: We acknowledge the support of a postdoc fellowship of the German Academic Exchange Service (DAAD) awarded to BW.
P1.26 Audiovisual integration of spatial stimuli is affected by performing a task jointly
Wahn, B., Keshava, A., Sinnett, S., Kingstone, A. & König, P. University of British Columbia
Humans continuously receive sensory input from several sensory modalities. Via the process of multisensory integration, this input is often integrated into a unitary percept. Researchers have investigated several factors that could affect the process of multisensory integration. However, in this field of research, social factors (i.e., whether a task is performed alone or jointly) have been widely neglected. Using an audiovisual crossmodal congruency task, we investigated whether social factors affect audiovisual integration. Pairs of participants received congruent or incongruent audiovisual stimuli and were required to indicate the elevation of these stimuli. We found that the reaction time cost of responding to incongruent stimuli (relative to congruent stimuli) was reduced significantly when participants performed the task jointly compared to when they performed the task alone. These results extend earlier findings on visuotactile integration by showing that audiovisual integration of spatial stimuli is also affected by social factors.
Acknowledgments: We acknowledge the support of a postdoc fellowship of the German Academic Exchange Service (DAAD) awarded to BW. Moreover, this research was supported by H2020-FETPROACT-2014 grant No. 641321 (socSMCs) (for BW & PK).
P1.27 The Effect of Multisensory Temporal Congruency on Pleasure
Yeh, M.S. & Shams, L. University of California, Los Angeles
Pleasure is a commonly shared experience, and yet perceptual pleasure has rarely been studied. Current research has discussed pleasure only in the realm of aesthetics and in unisensory modalities. Here, we aim to investigate the relationship between multisensory features and pleasure through the pleasure elicited by pairs of associated visual and auditory features. This will not only shed light on the experience of pleasure, but also potentially elucidate the relationship between perception and emotion. Our study investigated amodal multisensory association and its effect on felt pleasure. We examined amodal congruency through temporal synchronization between video and soundtrack. Video cuts and emphasized beats in an accompanying soundtrack served as temporal markers and were either synchronized or displaced to create congruent and incongruent conditions. Participants rated each video clip on pleasantness. We found a preference for temporal congruency, suggesting that temporally congruent audio-visual pairings are experienced as more pleasant. Importantly, this was the case despite the fact that the majority of participants did not notice any difference in temporal congruity across trials. We are also currently investigating the effect of "synaesthetic correspondences" on pleasure in the form of crossmodal associations. Our results indicate that multisensory perceptual features do influence our experience of pleasure, although the specifics remain to be discovered. Future research in this direction has the potential to enhance learning by increasing the associated experienced pleasure.
P1.28 Role of auditory and visual acuities in temporal binding window measurement
Unnisa Begum, V. & Barnett-Cowan, M. University of Waterloo
The integration of multisensory information allows the central nervous system (CNS) to create a coherent representation of events in the world. Sensory information from more than one modality is perceived as simultaneous when the signals co-occur within a specific range of temporal offsets called the temporal binding window (TBW)1. The width of the TBW increases with age, leading to difficulty in discriminating the temporal order (TOJ)2 and simultaneity (SJ)3 of audiovisual (AV) events. The ability to integrate AV information depends on the precise spatial4,5 and temporal discrimination6 of auditory and visual stimuli. With age, visual acuity declines7,8 and hearing impairments are also common, with 60% of individuals aged over 65 years exhibiting gradual decline in auditory sensitivity. Studies investigating audiovisual integration do not typically assess the unisensory acuity of participants but rather recruit those who self-report as having normal hearing and vision. As individual and group differences in the perceived timing of multisensory events could largely be explained by differences in sensory acuity, here we propose an economical and quick approach to measuring unisensory auditory and visual acuity using established screening tests. We recruited participants with self-reported normal vision and hearing. Visual acuity was measured using the Freiburg visual acuity test, which is freely available online9. Auditory acuity was determined using a smartphone (Apple, iOS) application, 'Ear Trumpet' (PraxisBiosciences, Irvine, California)10. For both SJ and TOJ tasks, participants indicated whether a single auditory beep (1850Hz, 7ms duration) and a flash of light (1cm diameter, 17ms duration) occurred at the same time (SJ task) or whether the auditory beep or the flash of light occurred first (TOJ task). We will present our preliminary results on how the point of subjective simultaneity and just noticeable difference of the SJ and TOJ tasks are affected by differences in auditory and visual acuity measures.
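Since the point of subjective simultaneity (PSS) and just noticeable difference (JND) are the planned outcome measures, a hedged sketch of one common way to estimate them for a TOJ task is given below: a cumulative Gaussian is fit to the proportion of 'light first' responses across SOAs. The SOA values, response proportions, and the 75%-point JND convention are illustrative assumptions, not the authors' analysis.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def cum_gauss(soa, pss, sigma):
    """Probability of responding 'light first' as a function of SOA in ms (positive = light leads)."""
    return norm.cdf(soa, loc=pss, scale=sigma)

# Hypothetical TOJ data: SOAs (ms) and proportion of 'light first' responses.
soas = np.array([-240, -120, -60, -30, 0, 30, 60, 120, 240], dtype=float)
p_light_first = np.array([0.05, 0.15, 0.30, 0.45, 0.55, 0.70, 0.80, 0.92, 0.98])

(pss, sigma), _ = curve_fit(cum_gauss, soas, p_light_first, p0=(0.0, 80.0))
jnd = sigma * norm.ppf(0.75)   # SOA change needed to go from 50% to 75% 'light first'
print(f"PSS = {pss:.1f} ms, JND = {jnd:.1f} ms")
```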
P1.29 Robust temporal averaging of time intervals between action and sensation
Chen, L. Peking University
Perception of the time interval between one's own action (a finger tap) and its sensory feedback (a visual flash or an auditory pip) is critical for precise and flexible control of action during human-machine interaction and behavioral decisions. Previous studies have employed sensorimotor synchronization and sensory-motor temporal recalibration tasks to examine the neuro-cognitive mechanisms underlying recalibrated representations of timing. In the present study, we investigated whether and how temporal averaging (i.e., 'ensemble coding') occurs over the multiple intervals in a train of action-sensory feedback events. In the unimodal task, participants voluntarily tapped their index finger at a constant pace while receiving either only visual feedback (flashes) or only auditory feedback (pips) throughout the train. In the crossmodal task, each tap in a given train was accompanied randomly by either visual or auditory feedback. When the sequence was over, observers produced a subsequent tap which elicited a target interval between the tap and its auditory/visual feedback. In both tasks, they were required to make a two-alternative choice to indicate whether the target interval was shorter or longer than the mean of the intervals in the preceding train. In both scenarios, participants' perception of target intervals was assimilated to the mean intervals associated with specific bindings of action and sensation, showing robust temporal averaging in the action-sensation loop. Moreover, the precision of temporal averaging depended on the variance of the time intervals and on the individual sensory modality.
Hide abstract
P1.30 Crossmodal associations modulate multisensory integration: modifying causal priors of simple auditory and visual stimuli
Tong, J., Bruns, P., Kanellou, A. & Roeder, B. Biological Psychology and Neuropsychology, University of Hamburg
Show abstract
Skilled puppeteers conceal their lip movements while moving a puppet’s mouth synchronously with speech sounds to produce the illusion that the voice originates from the puppet. This “ventriloquism effect” results from the optimal integration of multiple sensory cues. According to the “causal inference” framework, when auditory and visual stimuli have a high prior probability of being causally linked, based on context and past associations, there is strong multisensory integration; conversely, when auditory and visual stimuli have a low prior probability of being causally linked, there is weak multisensory integration. Here, we report a behavioral study in which we presented pairs of audio and visual stimuli, during association blocks, as completely congruent or drastically incongruent in both space and time, with the goal of driving causal priors in opposite directions. Following these association blocks, each pairwise combination of stimuli was presented in a typical ventriloquism effect paradigm with predetermined disparities and stimulus onset asynchronies. Stimuli that had been congruently paired in space and time were subsequently more strongly integrated overall (larger ventriloquism effect) compared to previously unpaired stimuli and compared to stimuli that had been incongruently presented in space and time. A follow-up experiment investigated how auditory stimuli would be localized when presented with two competing visual stimuli, one on either side, subsequent to association blocks. In agreement with the results from the first experiment, auditory stimuli were shifted more toward visual stimuli with which they had been congruently paired than toward visual stimuli with which they had not. Our findings provide support for the causal inference framework, suggesting the existence of causal priors between audio-visual stimuli that can be shaped by experience.
Acknowledgments: Work funded by the German Research Foundation (DFG) award TRR 169, Crossmodal Learning
Hide abstract
P1.31 Different processing of rapid recalibration to audio-visual asynchrony between spatial frequencies
Takeshima, Y. Doshisha University
Show abstract
The processing of audio-visual integration is affected by the features of visual stimuli. Our previous studies have found that the spatial frequency of visual stimuli influences this processing. Synchronous perception of visual and auditory stimuli is particularly modulated by spatial frequency: high spatial frequency increases the difference between physical and subjective synchrony. On the other hand, there is a recalibration process for audio-visual asynchrony. Recent studies have shown that audio-visual recalibration is induced by preceding asynchronous trials in experiments without explicit adaptation periods. The present study investigated the effects of spatial frequency on rapid recalibration to audio-visual asynchrony. In this experiment, Gabor patches of two spatial frequencies (i.e., 1 or 5 cycles/degree) were used as visual stimuli. These Gabor patches were presented with a pure tone at various stimulus onset asynchronies, and participants were instructed to report whether the audio-visual stimuli were synchronous or asynchronous. The results indicated that the difference between physical and subjective synchrony was larger for high than for low spatial frequency, as in our previous study. Moreover, the rapid recalibration effect was also larger for high than for low spatial frequency. Therefore, high spatial frequency induced a larger rapid recalibration effect to audio-visual asynchrony. This phenomenon likely follows from the larger difference between physical and subjective synchrony, because a large difference is necessary for a large recalibration to asynchrony between vision and audition.
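As a rough illustration of how a rapid-recalibration effect of this kind is often quantified (not necessarily the exact analysis used here), synchrony judgments can be fit separately for trials preceded by auditory-leading versus visual-leading trials and the resulting PSS estimates compared. The data and function names below are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

def sj_gaussian(soa, pss, width, amp):
    """Proportion of 'synchronous' responses modelled as a Gaussian over SOA."""
    return amp * np.exp(-0.5 * ((soa - pss) / width) ** 2)

def fit_pss(soas, p_sync):
    (pss, width, amp), _ = curve_fit(sj_gaussian, soas, p_sync, p0=[0.0, 100.0, 1.0])
    return pss

# Illustrative synchrony-response proportions at each SOA (ms, positive = audio lags),
# split by whether the preceding trial was auditory-leading or visual-leading.
soas = np.array([-300, -200, -100, 0, 100, 200, 300], dtype=float)
p_sync_after_audio_lead  = np.array([0.15, 0.40, 0.80, 0.95, 0.70, 0.30, 0.10])
p_sync_after_visual_lead = np.array([0.10, 0.30, 0.70, 0.95, 0.80, 0.40, 0.15])

# Rapid recalibration is commonly summarised as the PSS shift between the
# two preceding-trial conditions.
shift = fit_pss(soas, p_sync_after_audio_lead) - fit_pss(soas, p_sync_after_visual_lead)
print(f"PSS shift (rapid recalibration) = {shift:.1f} ms")
```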
Hide abstract
P1.32 Audio-visual associations show differential effects on auditory and visual responses in the mouse OFC
Sharma, S. & Bandyopadhyay, S. Indian Institute of Technology, Kharagpur
Show abstract
In a dynamic environment where contingencies change rapidly, flexible behaviour is important. The orbitofrontal cortex (OFC) is known for its role in flexible behaviour, decision making and the coding of stimulus value. However, sensory responses in the mouse OFC are poorly understood. We investigate the response properties of single neurons in the mouse OFC to multisensory stimuli and to multisensory associations, specifically auditory and visual. Retrograde tracer injections in the OFC establish connectivity between the OFC and areas such as PPC, Prh and TeA, potential sources of multisensory information into the OFC, as well as the secondary auditory cortex (AuD) and secondary visual cortex (V2). Single-unit responses to unisensory, tone (T) or LED (V), and multisensory (T+V) stimuli were recorded, with the mean latency to the multisensory stimulus falling between those to the auditory and visual stimuli. Single neurons in the OFC were found to be primarily (70%) audio-visual in nature. Responses to multisensory stimuli varied from sublinear to supralinear, showing nonlinearity in the integration of the two stimuli. In order to understand the nonlinear integration underlying these response properties, we parse the possible types of synaptic inputs (auditory only, visual only and multisensory) onto neurons in the OFC. We used an oddball stimulation paradigm with T/V as standard and V/T as deviant, and with T+V/T/V as standard and T/V/T+V as deviant. Comparisons were made with trains of T/V/T+V to confirm the presence of responses due to the oddball. We conclude that both multisensory and unisensory synapses provide inputs to OFC neurons. Using a T and V pairing paradigm, we also see differential effects of the association of T and V on auditory responses compared to visual responses. Based on the above results we propose a hypothetical local circuit model in the OFC that integrates auditory and visual information and may affect the computation of stimulus value in a dynamic multisensory environment.
Hide abstract
P1.33 Deficient prepulse inhibition of the startle reflex in schizophrenia using a cross-modal paradigm
Haß, K.H., Bak, N., Szycik, G.R., Glenthoj, B.Y. & Oranje, B. Hanover Medical School
Show abstract
Objectives: To investigate whether the typically reported deficient sensorimotor gating in patients with schizophrenia using unimodal paradigms can also be detected by a cross-modal paradigm which made use of an electrocutaneous-acoustic coupling of stimuli.
Methods: Twenty-four male schizophrenia patients took part in a prepulse inhibition (PPI) paradigm with an electrocutaneous prepulse and an acoustic startle-eliciting pulse. Their results were compared with those from twenty-three healthy males.
Results: As expected, the patients showed significantly lower PPI than controls. No associations were found between measures of illness severity and PPI.
Discussion: To the best of our knowledge, this is the first study showing reduced PPI in patients with schizophrenia using an electrocutaneous-acoustic prepulse-pulse combination. Hence, this study provides further evidence of a modality-independent sensorimotor gating deficit in schizophrenia. Furthermore, as PPI in controls was also lower than is usually observed with unimodal paradigms, the results are interpreted in favour of longer processing times for the electrocutaneous prepulse, which probably led to a shorter perceived stimulus onset asynchrony (SOA) in the brain.
Hide abstract
P1.34 Impaired sensory-motor learning in newly sighted children
Pfister, S., Senna, I., Wiebusch, D. & Ernst, M. O. Ulm University
Show abstract
The visual properties of an object, such as its size, influence its perceived weight and are used to predict the required fingertip forces for grasping. If there is a conflict between the visual estimates and actual object characteristics, like in the size-weight-illusion (SWI), the sensory-motor memory is updated such that fingertip forces are quickly scaled to the actual object properties (i.e., the object’s real weight), even though the SWI persists perceptually. Hence, the case of distinct processes in vision for action and perception has been made.
What happens if a person has so far had only haptic, but no visual, experience of the world? To address this question, we investigated a sample of Ethiopian children who had suffered from dense bilateral cataracts and were classified as congenitally blind. We compared their grasping performance after cataract removal, when they were able to see for the first time, with that of typically developing children of the same age. Participants lifted three differently sized but equally weighted objects several times while fingertip forces were recorded.
As expected, controls initially scaled the applied forces to the visually estimated weight of the objects (i.e., greater forces for bigger objects, which they expected to be heavier). However, within a few trials these controls scaled the forces to the actual object weight. In contrast, previously blind children did not change their force programming throughout the experiment, hinting at a failure to appropriately use vision for action. This suggests that vision alone, without prior visual experience, is not enough to make accurate predictions about object weight based on visual size information, despite the fact that these children can discriminate the size of the cubes. There seems to be no transfer from previous haptic experience of the world to vision in this kind of grasping task in previously blind children.
Hide abstract
P1.35 Perceptual Training of Multisensory Integration in Children with Autism Spectrum Disorder: A Single-Case Training Study
Dunham, K., Feldman, J.I., Conrad, J.G., Simon, D.M., Tu, A., Broderick, N., Wallace, M.T., & Woynaroski, T. Vanderbilt University
Show abstract
Children with autism spectrum disorder (ASD) demonstrate atypical responses to multisensory stimuli. Specifically, children with ASD exhibit wider temporal binding windows (TBWs) for audiovisual stimuli (see Baum et al., 2015), particularly for complex, social stimuli (e.g., audiovisual speech; Stevenson et al., 2014; Woynaroski et al., 2013). These disruptions in multisensory speech perception may produce cascading effects on language and communication development in children on the autism spectrum. Computer-based perceptual training programs have been shown to narrow TBWs in typically developing adults (De Niear et al., 2018; Powers et al., 2009), and it has been hypothesized that such programs may be similarly effective in children with ASD (e.g., Woynaroski et al., 2013). This pilot study represents an important first step in examining the effects of a perceptual training program on TBWs for audiovisual speech in children with ASD.
A single-case (i.e., experimental) research design was utilized over six weeks. Participants were four children with ASD between 7 and 13 years old. The study used a multiple baseline across participants design. Three children participated in baseline and intervention conditions. The introduction and withdrawal of the independent variable (i.e., training) were time-lagged; one control participant who entered treatment with a narrow TBW remained in baseline throughout the study. The dependent variable was the TBW derived from a simultaneity judgment task with audiovisual speech syllables. The intervention consisted of the same simultaneity judgment task with automatic, computer-delivered feedback on accuracy following each trial.
Two participants demonstrated a strong effect as a result of intervention. Additionally, the first participant to enter training demonstrated some maintenance of a narrower TBW. Results indicate TBWs in children with ASD may be malleable, but additional research is needed to have high confidence in the causal effects of the training paradigm, and to determine for whom changes in temporal binding are likely to be observed. Limitations due to study design and heterogeneity in subject age and future directions will be discussed.
Hide abstract
P1.36 The Principles of Multisensory Integration in the Rehabilitation of Hemianopia
Dakos, A. S., Jiang, H., Rowland, B. A. & Stein, B. E. Neurobiology and Anatomy, Wake Forest School of Medicine, Winston-Salem, North Carolina
Show abstract
Unilateral lesions of visual cortex induce a profound blindness in the contralateral hemifield (hemianopia). Recent results from our laboratory have demonstrated that this visual defect can be rehabilitated using a non-invasive sensory training procedure involving several weeks of repeated presentations of paired spatiotemporally concordant visual and auditory stimuli. The biological mechanisms supporting this recovery are presumed to involve plasticity mediated by visual-auditory neurons within the midbrain superior colliculus (SC). If so, this rehabilitative success should be constrained by the same spatial and temporal principles that govern the multisensory integration and plasticity of these SC neurons as demonstrated in physiological studies. The present experiments were designed to test this assumption. Animals (n=3) were first trained on a visual localization task. Hemianopia was then induced by large visual cortex lesions. After each animal’s defect was observed for 2.5 – 3 months, it was assigned one of two conditions for cross-modal training not designed to produce multisensory enhancement. Two cats received visual-auditory stimuli that were temporally concordant, but spatially disparate. The third received stimuli that were spatially concordant but temporally disparate. The rehabilitation was unsuccessful in each condition even after two months. The training procedure was then repeated for each animal using cues designed to produce multisensory enhancement (e.g., spatiotemporally concordant cues). Rehabilitation was then successful in each group within two months, thereby confirming previous observations. These data are consistent with the hypothesis that the same spatial and temporal principles that govern multisensory integration in individual SC neurons also govern the success of this cross-modal rehabilitative training program.
Acknowledgments: Supported by NIH grants R01EY026916 and F31EY027686 and the Tab Williams Family Foundation.
Hide abstract
P1.37 Sub-clinical levels of autistic traits impair multisensory integration of audiovisual speech
van Laarhoven, T., Stekelenburg, J.J. & Vroomen, J. Department of Cognitive Neuropsychology, Tilburg University
Show abstract
Autism Spectrum Disorder (ASD) is a pervasive neurodevelopmental disorder characterized by restricted interests, repetitive behavior, deficits in social communication and atypical multisensory perception. ASD symptoms are found to varying degrees in the general population. While impairments in multisensory speech processing are widely reported in clinical ASD populations, the impact of sub-clinical levels of autistic traits on multisensory speech perception is still unclear. The present study examined audiovisual (AV) speech processing in a large non-clinical adult population in relation to autistic traits measured by the Autism Quotient. AV speech processing was assessed using the McGurk illusion, a simultaneity judgment task and a spoken word recognition task in background noise. We found that difficulty with Imagination was associated with lower susceptibility to the McGurk illusion. Furthermore, difficulty with Attention-switching was associated with a wider temporal binding window and reduced gain from lip-read speech. These results demonstrate that sub-clinical ASD symptomatology is related to reduced AV speech processing performance, and are consistent with the notion of a spectrum of ASD traits that extends into the general population.
Hide abstract
P1.38 Modified Medial Geniculate Projections to Auditory and Visual Cortex Following Early-Onset Deafness
Trachtenberg, B., Butler, B.E. & Lomber, S.G. University of Western Ontario
Show abstract
Following early-onset deafness, electrophysiological and psychophysical studies have demonstrated crossmodal plasticity throughout “deaf” auditory cortex. These studies suggest that there may be a functional reorganization of cortical afferents to these reorganized regions of cortex that underlies the crossmodal plasticity. For the most part, retrograde pathway tracer studies of deposits made into auditory cortex have identified little modification in the relative distribution of thalamic and cortical neurons that project to deaf auditory cortex. However, studies of crossmodally reorganized auditory cortex consistently show increased dendritic spine density. These studies suggest that, following early-onset hearing loss, there may also be increased numbers of axon terminals in auditory cortex. To investigate this possibility, we examined efferent projections from the auditory thalamus (medial geniculate body; MGB) of hearing and early-deaf cats in order to reveal the distribution and density of synaptic boutons on thalamocortical neurons. Anterograde fluorescent dextran tracers were deposited bilaterally in the MGB in order to label axon terminals throughout cortex. Axon terminal labelling in each cortical area was computed as a percentage of all labelled terminals. In hearing cats, the auditory labelling profile of these projections is similar to that of previous tracing studies, with the largest terminal labelling in primary auditory cortex (A1), the posterior auditory field (PAF), and the anterior auditory field (AAF). However, following early-onset deafness, projections from the MGB reorganize, and there is increased terminal labelling in visual cortical areas. Therefore, taken together with retrograde studies quantifying auditory thalamocortical projections, it appears that the reorganization of projections to auditory cortex lies not in the numbers of neurons projecting to a given area, but in the numbers of axon terminals of those neurons.
Acknowledgments: This work is supported by the Canadian Institutes of Health Research.
Hide abstract
P1.39 Perceived Simultaneity and Temporal Order of Audiovisual Events Following Concussion
Wise, A. & Barnett-Cowan, M. University of Waterloo, Department of Kinesiology
Show abstract
The central nervous system allows for a limited time span, referred to as the temporal binding window (TBW), in order to rapidly determine whether multisensory signals correspond to the same event. Failure to correctly identify whether multisensory events occur simultaneously, and in what order, can lead to inaccurate representations of the physical world, poor decision-making, and dangerous behavior. Damage to the neural systems that coordinate the relative timing of sensory events may explain some of the long-term consequences associated with concussion. The aim of this study was to investigate whether the perception of simultaneity and the discrimination of temporal order of audiovisual stimuli are impaired in those with a history of concussion. Fifty participants (17 with a concussion history) were recruited to complete audiovisual simultaneity judgment (SJ) and temporal order judgment (TOJ) tasks. From these tasks, the TBW and point of subjective simultaneity (PSS) were extracted to assess whether the precision and/or the accuracy of temporal perception change with concussion, respectively. Results demonstrated that those with a concussion history have a significantly wider TBW (less precise), with no significant change in the PSS (no change in accuracy), particularly for the TOJ task. Importantly, a negative correlation between the time elapsed since concussion diagnosis and TBW width in the TOJ task suggests that precision in temporal perception does improve over time. These findings suggest that those with a concussion history display an impairment in the perceived timing of sensory events and that monitoring performance in the TOJ task may be a useful additional assessment tool when making decisions about returning to regular work and play following concussion.
Acknowledgments: NSERC Discovery Grant (#RGPIN-05435-2014) and a University of Waterloo Research Incentive Fund Grant to MB-C. We thank Robert Burns, David Gonzalez, Robyn Ibey, and Travis Wall for study design and participant recruitment and testing assistance.
Hide abstract
P1.40 Group differences in audiovisual multisensory integration in individuals with and without autism spectrum disorder: A systematic review and meta-analysis
Feldman, J.I., Dunham, K., Samuel, A., Cassidy, M., Liu, Y. & Woynaroski, T.G. Vanderbilt University
Show abstract
Differences in sensory function are now considered diagnostically significant for persons with autism spectrum disorder (ASD). A number of prior studies have evaluated how individuals with ASD differ from their typically developing (TD) peers on measures of multisensory integration (MSI). The present study systematically reviewed and quantitatively synthesized the extant literature on audiovisual MSI in individuals with ASD to (a) better estimate the effect size for group differences between individuals with ASD and TD peers and (b) test a number of theoretically and/or empirically motivated study-level factors that may moderate the overall effect (i.e., explain differential results seen across studies carried out to date).
To identify eligible studies, a comprehensive search strategy was devised using the ProQuest search engine, the PubMed database, forwards and backwards citation searches, author contact, and hand-searching of select conference proceedings. Eligibility criteria for studies were (a) confirmation of ASD diagnosis via a standardized measure and (b) inclusion of a behavioral or neural measure of audiovisual integration. Data were extracted from all studies that tested between-group differences (Hedges’ g).
A random effects meta-analysis with robust variance estimation procedures was conducted with 108 effect sizes from 48 studies clustered into 32 groups based on overlapping samples between studies. A significant group difference was evident in the literature, g = –0.41, p < 0.001, with individuals with ASD demonstrating diminished audiovisual integration on average compared to TD peers. This effect was moderated by mean participant age, b = 0.03, p = 0.05, such that between-group differences tended to be larger in magnitude in samples of younger versus older chronological ages.
Results indicate that individuals with ASD demonstrate reduced audiovisual MSI compared to their TD peers in the literature, and that these differences are more pronounced earlier versus later in life. Limitations, implications and future directions for primary and meta-analytic research will be discussed.
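For readers less familiar with how study-level effect sizes are pooled in a random-effects model, the sketch below applies a basic DerSimonian-Laird estimator to hypothetical Hedges' g values. The analysis reported above additionally used robust variance estimation to handle dependent effect sizes, which this simplified example does not implement.

```python
import numpy as np

def random_effects_pool(effects, variances):
    """Pool effect sizes with a DerSimonian-Laird random-effects model."""
    effects = np.asarray(effects, dtype=float)
    variances = np.asarray(variances, dtype=float)
    w = 1.0 / variances                                   # fixed-effect weights
    fixed_mean = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - fixed_mean) ** 2)           # heterogeneity statistic Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)         # between-study variance
    w_star = 1.0 / (variances + tau2)                     # random-effects weights
    pooled = np.sum(w_star * effects) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return pooled, se, tau2

# Hypothetical Hedges' g values and sampling variances from five studies.
g = [-0.55, -0.30, -0.62, -0.18, -0.41]
v = [0.04, 0.06, 0.05, 0.08, 0.03]
pooled, se, tau2 = random_effects_pool(g, v)
print(f"pooled g = {pooled:.2f} (SE = {se:.2f}), tau^2 = {tau2:.3f}")
```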
Hide abstract
P1.41 The Relationship Between Tactilely and Visually Driven Activation of Early Visual Cortex in the Visually Impaired
Stiles, N.R.B., Choupan, J., Jung, E., Purington, C., Wang, J., Law, M., Kashani, A.H., Ameri, H., Aguirre, G K., Weiland, J.D., Patel, V.R. & Shi, Y. University of Southern California
Show abstract
Crossmodal activation of visual cortex by both tactile and auditory stimuli has been shown to occur in the fully blind. However, it has not been extensively studied how the brain transitions from normal visual processing to crossmodal processing in visual cortex as vision is progressively lost with retinal disease.
The Human Connectomes for Low Vision, Blindness, and Sight Restoration research project employs retinal, functional, and fMRI metrics to investigate the interplay of visual and tactile processing in early visual regions of individuals with low vision. In particular, we are interested in individuals with distinct retinal scotomas that impair vision spatially. We are studying whether tactile activation of visual cortex can occur in the lesion projection zone (the projection of a scotoma onto visual cortex); namely, does tactile stimulation excite regions of visual cortex that no longer have visual inputs? In addition, we are using functional measures to determine whether or not any partial capture of visual cortex by somatosensation in low-vision individuals generates improved performance in tactile tasks.
We will present our preliminary results (N = 11) comparing patients’ functional visual and tactile capabilities with the magnitudes and locations of visual cortex activation during visual and tactile tasks after the onset of low vision. We will also compare early visual cortex activation (extent and amplitude) within the scotoma during a visual flashing-light task with visual cortex activation in the same region during two tactile tasks (roughness discrimination and shape symmetry perception). Our preliminary results indicate that the level of residual visual perception plays a critical role in determining the increase in tactile crossmodal activation observed in those with low vision.
Hide abstract
P1.42 Alpha oscillations as an index of lip-reading ability
Ganesh, A.C.(1), Dimitrijevic, A.(2) & Shahin, A.(1) (1) Center for Mind and Brain, University of California, Davis, CA, USA; (2) Otolaryngology—Head and Neck Surgery, Sunnybrook Health Sciences Centre, Toronto, ON, Canada
Show abstract
Audiovisual (AV) integration of spoken language involves the visual modality acting on phonetic representations of the auditory modality. An example of such a process is the McGurk illusion, whereby visual context alters the phonetic identity of the acoustic input. In the current study, we sought to understand the relationship between susceptibility to the McGurk illusion and lip-reading ability, and the underlying neural mechanisms. EEG was acquired while good and poor McGurk perceivers watched silent videos of a speaker uttering words and judged whether the words had animate (e.g., dog, cat) or inanimate (e.g., chair, desk) meaning, or indicated that they were unsure of the meaning. We hypothesized that individuals who are susceptible to the McGurk illusion would have stronger lip-reading abilities than those who are poorly susceptible to the illusion. We further hypothesized that good lip-readers should exhibit greater engagement of visual and auditory areas, indexed by desynchronization of alpha-band activity over occipital and central scalp locations, respectively. Our findings showed that the potency of the McGurk illusion did not correlate with lip-reading ability. Furthermore, contrary to our hypothesis, we found that, when compared to poor lip-readers, good lip-readers exhibited synchronization of alpha activity over parietal-occipital sites. The alpha results are indicative of reduced engagement of attentional and visual networks and hence reduced cognitive effort in good lip-readers. In short, our findings do not support the premise that lip-reading ability is associated with more robust AV integration; rather, they support the hypothesis that good lip-reading ability is associated with reduced attentional demands during visual speech perception.
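For illustration, alpha-band activity of the kind analysed here is commonly quantified as band-limited spectral power; desynchronization then appears as a power decrease relative to a baseline or comparison condition. The sketch below uses simulated data and is not the authors' pipeline.

```python
import numpy as np
from scipy.signal import welch

fs = 500.0                                   # sampling rate (Hz), example value
t = np.arange(0, 2.0, 1.0 / fs)
# Simulated single-channel EEG: a 10 Hz alpha rhythm embedded in noise.
eeg = 2.0 * np.sin(2 * np.pi * 10 * t) + np.random.randn(t.size)

# Welch power spectral density.
freqs, psd = welch(eeg, fs=fs, nperseg=int(fs))

# Mean power in the alpha band (8-12 Hz); comparing this value across
# conditions or against a baseline indexes (de)synchronization.
alpha = (freqs >= 8) & (freqs <= 12)
print(f"alpha-band power: {psd[alpha].mean():.2f} (arbitrary units)")
```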
Hide abstract
P1.43 Audiovisual Integration of Consonant Clusters
Andersen, T.S. & Gil-Carvajal, J-C. Technical University of Denmark
Show abstract
Seeing incongruent visual speech can alter the auditory phonetic percept. In the McGurk fusion illusion the auditory percept is a single consonant different from both the acoustic and the visual consonant. In the McGurk combination illusion the auditory percept contains both consonants. It remains unclear why some audiovisual stimuli elicit combination illusions. It is also unexplored how actual consonant combinations integrate audiovisually. Here we investigate the integration of audiovisually congruent and incongruent combinations of /aba/, /aga/, /ada/, /abga/, and /abda/. We found that visual stimuli containing a bilabial component (/aba/, /abga/ and /abda/) all facilitated perception of both acoustic consonant clusters regardless of audiovisual congruence. This is surprising because incongruent visual stimuli usually lead to illusory, hence incorrect, responses. The effect was most likely caused by the visual bilabial closure, as we found a general increase in bilabial responses. Visual consonant clusters also produced combination illusions for auditory /aga/ and /ada/, and these responses were similar to the combination illusion induced by visual /aba/. The velar and alveolar components of visual consonant clusters did, however, also have an effect on auditory perception, as they influenced perception of auditory /aba/, inducing novel combination illusions in which subjects perceived /abda/. Acoustic consonant clusters dubbed onto visual velar or alveolar stimuli also created novel illusions. For example, acoustic /abga/ dubbed onto visual /aga/ created an illusion of hearing /agda/ or /adga/. This illusion could be due to the acoustic /b/ and visual /g/ creating a fusion illusion of hearing /d/ while leaving perception of acoustic /g/ unaffected. This indicates that the opening, closing and release stages of consonants can integrate differentially. We hypothesise that this may explain why some audiovisual combinations produce combination illusions while others produce fusion or visual dominance illusions.
Hide abstract
P1.44 Vision dominates audition in adults but not children: Adults have a lower threshold for the McGurk effect in audio-visual noise
Hirst, R.J., Stacey, J., Cragg, L., Stacey, P.C. & Allen, H.A. University of Nottingham
Show abstract
Across development, humans show an increasing reliance upon vision, such that vision increasingly drives audio-visual perception. This is evidenced in illusions such as the McGurk effect, in which a seen mouth movement changes the perceived sound. The current paper assesses the effects of manipulating the heard and seen signal by adding auditory and visual noise to McGurk stimuli in children aged 3 to 12 years (n=90) and adults aged 20 to 35 years (n=32). Auditory noise increased the likelihood of vision changing auditory perception. Visual noise reduced the likelihood of vision changing auditory perception. Based upon a proposed developmental shift from auditory to visual dominance we predicted that children would be less susceptible to the McGurk effect, and that adults would show the effect in higher levels of visual noise and with less auditory noise compared with children. We found that susceptibility to the McGurk effect increased with development and was higher in adults than children. Children required more auditory noise than adults to induce McGurk responses and less visual noise to reduce McGurk responses (i.e. adults and older children were more easily influenced by vision). Reduced susceptibility in childhood supports the theory that sensory dominance shifts across development.
Hide abstract
P1.45 INTEGRATION OF SMELL AND TASTE: EEG study of brain mechanisms allowing the enhancement of saltiness with aroma
Sinding, C., Thibault, H. & Thomas-Danguin T. Centre des Sciences du Goût et de l’Alimentation, AgroSup Dijon, CNRS, INRA, Université Bourgogne Franche-Comté, F-21000 Dijon, France.
Show abstract
Odors have the natural property of inducing a taste (odor-induced taste enhancement, OITE). Yet odors and tastes are perceived through independent senses, which interact only in the brain. OITE processes are mostly unconscious, but decisive for the pleasure of food. Taste and smell may interact at different levels of the integration process. The main theory is that a configural pattern of activation is stored in high-level integration cortices or memory areas and needs to be reactivated, through top-down processes, in order to induce a taste perception. However, recent findings in rats showed that early connections between gustatory and olfactory cortices enabled activation of secondary olfactory cortex (piriform cortex) when rats were stimulated with a sugar solution. Here, we tested these hypotheses in humans. We examined the brain chronometry of taste and smell integration with a simple 5-electrode EEG system in combination with a high-temporal-resolution gustometer. We used close-to-real products: a green-pea soup with two levels of salt, “usual” and “reduced” (-25% salt), and a “beef stock” aroma. The idea was to compare the usually salted soup (S.usu) and the soup with a reduced level of salt (S.red) with the soup containing a reduced level of salt plus the beef stock aroma (S.red.A). The stimulation consisted of 60 µl of one solution sprayed as a thin drizzle onto the tongue for 400 ms (repeated 40 times, interleaved with 16 to 20 s water stimulations). We identified two late peaks, N2 and P3, which appeared only for the salty solutions and not for the controls (soup alone and soup with aroma). The differential N2-P3 amplitude was higher for the S.red.A solution than for S.red. Finally, the N2-P3 latency was longer for the S.red.A solution than for S.usu. As these effects are found in late components of the event-related potential, the results seem to confirm the main theory, namely that aroma may affect taste through the activation of flavor memory in high-level integration cortices.
Hide abstract
P1.46 Shapes associated with emotion can influence product taste expectations
Orejarena, M.C., Salgado-Montejo, A., Salgado, R., Betancur, M.I., Velasco, C., Salgado, C.J. & Spence, C. Universidad de La Sabana; Center for Multisensory Marketing, BI Norwegian Business School; Neurosketch Colombia; Crossmodal Research Lab, University of Oxford
Show abstract
In recent years, there has been steady interest in unearthing the relation of visual features to both emotional valence and gustatory tastes. Different studies have demonstrated that visual features such as roundness/angularity, symmetry/asymmetry, and the number of elements can be associated with both an emotional valence and basic tastes (sweet or sour). There is increasing evidence that simple geometric shapes that resemble facial features can be associated with a valence and with an emotion. What is more, there is research showing that experiencing a gustatory taste is generally accompanied by a facial expression. However, no studies have probed whether geometric shapes that resemble facial expressions of taste can be matched to basic tastes. This study explores whether shapes that resemble facial expressions influence taste expectations in the context of product packaging. The results indicate that shapes that resemble eye- and mouth-like configurations can be matched to different basic tastes (i.e., sweet, sour, and bitter). We found that the product category has an important influence on the degree to which each of the face-like features influences taste expectations. The present study suggests that low-level visual features may be involved in capturing meaning from facial expressions and opens the possibility that simple face-like features may be used in applied contexts to communicate basic tastes. Our findings hint towards an embodied mechanism for at least some shape-taste associations.
Hide abstract
P1.47 Do Gustatory Global-Local Processing Styles Prime Vision?
Karademas, C. & List, A. Hamilton College
Show abstract
When we perceive sensory information, we can concentrate on either the details or the whole of the object or experience, taking a local or global processing style. We can adopt these processing styles in all five senses. Addressing how adopting a processing style in one modality influenced processing of another modality, J. Förster (2011) reported an extensive series of studies pairing gustatory, olfactory, auditory and tactile senses with vision. Though he reported bi-directional processing style priming between all the pairings he tested, his paper was later retracted based on statistical analyses conducted during an institutionally-driven investigation. Without taking a position on his data, we have instead conducted an independent methodological replication of two of his experiments examining gustatory global-local priming on vision. As in Förster’s (2011) reported studies, in one study, we instructed participants to focus on either the details or the whole (goal-driven) or, in a second study, we manipulated the stimuli to promote a local or global focus (stimulus-driven). In both studies, participants first performed a “gustation” task (more accurately described as an eating task because participants derived gustatory, olfactory, haptic and auditory information). We measured whether they subsequently adopted a more global or local processing style during an ambiguous visual matching task. Contrary to Förster’s (2011) findings, gustatory global or local focus, whether goal- or stimulus-directed, did not have an effect on visual processing in either study. The current studies not only enhance our understanding of the limits of cross-modal priming, but also contribute more broadly to scientific self-correction through independent research replication.
Hide abstract
P1.48 Psychological effects induced multimodally by the aroma and the color of bottles
Okuda, S., Takemura, A., & Okajima, K. Doshisha Women’s College of Liberal Arts
Show abstract
This study aims to clarify how the aroma and color of bottles multimodally induce psychological effects. We prepared six kinds of essences (lavender, lemon grass, cypress, damask rose, spearmint and bergamot) as aroma stimuli. Each diluted essence was dropped into a small bottle wrapped with one of six kinds of color labels: red, orange, yellow, green, blue and purple. We conducted three kinds of subjective experiments. In the visual experiment, participants observed one of the bottles without an olfactory stimulus. In the olfactory experiment, they smelled one of the essences with no visual stimulus. In the visual-olfactory experiment, they observed one of the bottles while smelling one of the essences. Participants evaluated four types of psychological effects (active, refreshing, positive and relaxing) using numerical scales from 0 to 10. The twenty participants were all female in their twenties, and they were screened using the Ishihara color vision test and the T&T olfactory test. Results of the visual experiment showed that the red and orange bottles caused active and positive impressions whereas the green bottle caused refreshing and relaxing effects. On the other hand, results of the visual-olfactory experiment indicated that the highest active effect was
Hide abstract
P1.49 Heart rate and skin conductance responses during assimilation and contrast of different juice samples
Verastegui-Tena, L.M., van Trijp, H. & Piqueras-Fiszman, B. Wageningen University and Research
Show abstract
Disconfirmations between consumers’ expectations and a product can lead to different processes such as assimilation and contrast. When studying these processes, however, it could be beneficial to take a broader approach to the effects of disconfirmed expectations. For example, food research could benefit from looking at consumers’ physiological responses, such as those of the autonomic nervous system (ANS), to understand their initial reactions during these processes. This study evaluated how ANS responses change during assimilation and contrast and whether these responses differ from those obtained when there is no manipulation of expectations.
Eighty-six participants tasted fruit and vegetable juices in two separate sessions. They were divided into two conditions. In the first, expectations were manipulated by showing participants the image of an ingredient and then providing them with juices whose flavours were congruent, slightly incongruent or largely incongruent with that of the image. In the second condition, the juices were tasted blindly and the image of the ingredient was shown after tasting. Heart rate and skin conductance were measured. To confirm that assimilation and contrast were experienced, participants rated the samples on different sensory properties before and after tasting each sample. Most of the sensory ratings, except those for sourness and taste intensity, showed that there was assimilation and contrast. Heart rate changes were related to whether it was the participants’ first or second session of the study, while skin conductance changed according to whether the samples were tasted blindly or not. In conclusion, while our design managed to create situations of assimilation and contrast, ANS responses did not capture factors related to these processes but rather other factors that could be, for example, related to attention and the orienting response.
Hide abstract
P1.50 The homunculus: grounding cognition
Forster, B. & Calvo-Merino, B. City, University of London
Show abstract
Approaches to embodied cognition have shown that language and mental transformations can be grounded in body experiences. These approaches emphasise the link between cognition and the motor system, while we have recently shown the involvement of the somatosensory system in visual tasks involving affective judgments or memory of body images (Sel et al., 2014; Galvez-Pol et al., 2018). Furthermore, we now show that attentional selection can also recruit somatosensory areas in a visual search task. Participants were asked to detect either a certain colour or a certain hand posture amongst several hand images. We analysed visual ERPs evoked by the onset of the visual stimulus display and found the N2pc component over visual cortex, reflecting attentional target selection processes. In addition, on half of the trials somatosensory ERPs were elicited by task-irrelevant tactile probes presented simultaneously with the visual onset. We isolated somatosensory activity by subtracting visual-only trials from tactile-probe trials. Importantly, the N140cc was present only when selecting for posture, but not for colour, confirming attentional recruitment of somatosensory cortex. Our findings show that embodiment in visual search is not automatic when seeing body images but rather task dependent. Further, they extend current assumptions about the sensory specificity of attention to include not only the sensory modality perceiving the stimuli but also the functionally relevant sensory cortex. Taken together, our findings reveal a distinct role of the homunculus in grounding cognition beyond sensory processes.
Hide abstract
P1.51 More than skin-deep: Integration of skin-based and musculo-skeletal reference frames in localisation of touch
Sadibolova, R., Tamè, L. & Longo, M.R. Birkbeck, University of London
Show abstract
The skin of the forearm is, in one sense, a flat 2D sheet, but in another sense approximately cylindrical, mirroring the volumetric shape of the arm. The role of frames of reference based on the skin as a 2D sheet versus the 3D musculo-skeletal structure of the arm remains unclear. When we rotate the forearm from a pronated to a supinated posture, the skin on its surface is displaced. Thus, a marked location will slide with the skin across the underlying flesh, and a touch perceived at this location should follow this displacement if it is localised within a skin-based reference frame. We investigated, however, whether perceived tactile locations were also affected by the rearrangement of the underlying musculo-skeletal structure, i.e., displaced medially or laterally on a pronated or supinated forearm, respectively. Participants pointed to perceived touches (Experiment 1), or marked them on a three-dimensional size-matched forearm on a computer screen (Experiment 2). The perceived locations were indeed displaced medially after forearm pronation in both response modalities. This misperception was reduced (Experiment 1), or absent altogether (Experiment 2), in the supinated posture, when the actual stimulus grid moved laterally with the displaced skin. The grid was perceptually stretched along the medial-lateral axis and displaced distally, which suggests the influence of skin-based factors. Our study extends the tactile localisation literature focused on the skin-based reference frame and on the effects of the spatial positions of body parts by implicating the musculo-skeletal reference frame in the localisation of touch on the body.
Hide abstract
P1.52 Vision enhances touch just before grasping an object
Juravle, G., Colino, F., Meleqi, X., Binsted, G. & Farnè, A. Impact Team, INSERM U1028, CNRS UMR5292, Lyon Neuroscience Research Center, University Claude Bernard Lyon 1, Lyon, France
Show abstract
Tactile sensitivity measured on the hand is significantly decreased for a moving, as opposed to a resting, hand during the execution of goal-directed movements. This process, known as tactile suppression or gating, is affected by the availability of visual information. However, it is unclear at present whether the availability of visual information during action differentially modulates tactile sensitivity at the different timings of a goal-directed reach-to-grasp movement, especially with regard to the crucial time period shortly before grasping and lifting an object. Here we investigated this question by having participants reach, grasp, and lift an object placed on the table in front of them, under conditions of full vision or limited vision (movement executed in the dark), while probing tactile sensitivity. For this, we utilized measures from signal detection theory (d′ and the criterion c). When present, the tactile stimulation was a 2 ms square wave, thresholded in a pre-test at rest for a 90% detection level. Tactile stimulation could be delivered with equal probability at the moving or the resting hand, at one of four different timings: movement preparation, movement execution, before grasping, and while lifting the goal object. Our results indicate significant gating of tactile information at the moving, as compared to the resting, hand. Importantly, sensitivity at the moving hand is clearly affected by the availability of visual information only for the before-grasp timing of stimulation: tactile sensitivity is clearly enhanced when vision is available, as compared to the condition in the dark. These results are in line with the well-known visual preference for the index finger in reach-to-grasp tasks and demonstrate, for the first time, that vision also drives what is felt at the index finger when grasping an object.
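As a reminder of how these signal detection measures are computed from detection data, the short sketch below derives d′ and the criterion c from hit and false-alarm counts; the numbers are illustrative only, not data from this study.

```python
from scipy.stats import norm

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """Return (d_prime, criterion_c) from detection counts.

    A simple correction keeps the hit and false-alarm rates away from 0 and 1.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z_hit, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
    return z_hit - z_fa, -0.5 * (z_hit + z_fa)

# Illustrative counts for stimulation delivered to one hand in one timing condition.
d_prime, criterion = sdt_measures(hits=62, misses=18, false_alarms=7, correct_rejections=73)
print(f"d' = {d_prime:.2f}, c = {criterion:.2f}")
```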
Hide abstract
P1.53 Pompoms and white blocks should be light: Evidence of how we act upon weight expectations
Wilson, H., Walker, P. & Bremner, G. Lancaster University
Show abstract
Research has shown evidence of a brightness-weight correspondence in which people expect darker objects to be heavier than brighter objects (Walker, Francis, & Walker, 2010). The aim of these experiments was to confirm the presence of this correspondence through verbal measures, and also to examine whether the correspondence is revealed through our interactions with objects.
In Experiment 1, participants were asked to make weight judgements, by vision alone, about identically weighted blocks that varied either in material (sand, pompom), thought to be a relatively obvious cue to weight, or in brightness (grey, black, white). As expected, in the material trials participants rated the pompom block as the least heavy and the sand block as the heaviest. In the brightness trials, the brighter block was rated as lighter in weight than the darker block. These verbal measures confirmed the presence of material-weight and brightness-weight correspondences.
Research has shown that people reach for and transport objects differently based on their expected weight (Eastough & Edwards, 2007; Paulun, Gegenfurtner, Goodale, & Fleming, 2016). In Experiment 2, participants were asked to lift a series of blocks (the same stimuli as in Experiment 1) to examine whether kinematics were differentiated for objects of different brightness. Material blocks were also included to see how kinematics vary for an arguably more obvious correspondence. Sand blocks were lifted significantly higher during transport than pompom blocks (p = .011), suggesting that more force was used to lift the ‘heavier’ block. Black blocks were approached with significantly greater maximum velocity than white blocks (p = .032), which we suggest is evidence that more caution was taken with the ‘lighter’ block. This demonstrates early evidence that the brightness-weight crossmodal correspondence is utilised in everyday interactions with objects.
Hide abstract
P1.54 Audiovisual Interactions in Primary Auditory Cortex of the Mongolian Gerbil (Meriones unguiculatus) Probed with Amplitude-Modulated Stimuli
Bremen, P. Department of Neuroscience, Erasmus MC, Rotterdam
Show abstract
The anatomical substrates of cortical and subcortical audiovisual connections in the Mongolian gerbil are well described. However, a functional characterization of audiovisual interactions in this species is largely missing.
To address this knowledge gap, we recorded with silicon probes in the primary auditory cortex of ketamine/xylazine-anesthetized gerbils. We presented stimuli via two free-field speakers and speaker-mounted light-emitting diodes (LEDs) located at 60 deg contralateral/ipsilateral re. the recording side (distance re. head: 107 cm). Auditory (noise) and visual (light) stimuli consisted of a 500-ms static part followed by a 500-ms amplitude-modulated part. The leading static part was present in all stimuli. In unimodal auditory (visual) stimuli only the noise (LED) was amplitude modulated while the LED (noise) remained static. In audiovisual stimuli both the sound and the LED were amplitude modulated. We systematically varied a) modulation frequency, b) modulation depth, c) the delay between the modulation onsets of sound and LED, d) LED color (red, green, blue), and e) LED location (contra/ipsi).
In congruence with the literature we found modulatory effects of visual stimulation in auditory cortex. We observed both facilitatory and suppressive interactions with congruent and incongruent modulation frequencies. The strongest audiovisual interactions occurred with small temporal delays (+/-100 ms). Audiovisual responses to amplitude modulation could lead or lag re. unimodal responses. Surprisingly, depending on the delay between sound and LED additional response peaks could arise which were absent in unimodal conditions. Furthermore, we found a positive correlation between audiovisual interactions and LED modulation depth. And, audiovisual interactions were diminished or absent with red-light or ipsilateral LED stimulation. All of these effects occurred in both hemispheres.
We conclude that a) the main principles of multisensory integration hold true for gerbil auditory cortex, b) amplitude-modulated sounds and lights are suitable stimuli for the study of audiovisual integration and may be useful surrogates for complex audiovisual speech stimuli.
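To make the stimulus design concrete, the sketch below generates one channel of the kind of auditory stimulus described: a 500-ms static noise segment followed by a 500-ms sinusoidally amplitude-modulated segment. The sampling rate, modulation frequency and depth are example values, not those used in the experiments.

```python
import numpy as np

fs = 48000                      # audio sampling rate (Hz), example value
static_dur, am_dur = 0.5, 0.5   # durations (s) of the static and AM parts
mod_freq = 8.0                  # modulation frequency (Hz), example value
mod_depth = 0.8                 # modulation depth (0-1), example value

rng = np.random.default_rng(0)
noise = rng.uniform(-1.0, 1.0, int(fs * (static_dur + am_dur)))

# Envelope: flat during the leading static part, sinusoidal AM afterwards.
n_static = int(fs * static_dur)
t_am = np.arange(noise.size - n_static) / fs
envelope = np.ones(noise.size)
envelope[n_static:] = 1.0 + mod_depth * np.sin(2 * np.pi * mod_freq * t_am)
envelope /= envelope.max()      # keep the waveform within [-1, 1]

stimulus = noise * envelope
```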
Acknowledgments: This research is funded by the Department of Neuroscience, Erasmus MC, Rotterdam. We would like to thank Dr. Gerard Borst for providing funds and the infrastructure to perform these experiments. We are grateful to Dr. John van Opstal for generously gifting TDT recording hardware. Alex Brouwer is thanked for invaluable technical assistance and Ruurd Lof and Kees Donkersloot for assistance with electronics design and implementation.
Hide abstract
P1.55 Endogenous attention enhances neuronal signature of audio-visual sound-shape correspondence
Chow, H.M. & Ciaramitaro, V.C. University of Massachusetts Boston
Show abstract
Associations between abstract shapes and nonsense words, e.g., round shapes and /bouba/ sounds, have been observed across cultures and early in development. Yet, how automatic is this association, and does attention influence such crossmodal correspondence? More specifically, does attending to a sound enhance the representation of the corresponding (congruent) shape feature naturally associated with this sound? We investigated the role of attention in sound-shape correspondence using steady-state visual evoked potentials (SSVEPs) recorded by electroencephalography.
Participants viewed one spikey and one round shape, half a shape in each visual hemifield. Each shape flickered at one of two frequencies (5.45, 7.5Hz) under one of three auditory conditions: no sound, or a /ba/ or /ki/ sound repeated at 3Hz. Across blocks, endogenous attention was directed away from shapes and sounds (participants detected color changes at central fixation) or distributed uniformly across shapes and sounds as which stimulus would change was unpredictable (participants detected border thickness changes in shapes and volume reduction in sounds). We expected a feature-based attentional enhancement: enhanced neuronal processing of a visual shape (e.g., round shape) by a concurrently presented congruent sound (i.e. /ba/) and/or reduced processing by an incongruent sound (i.e. /ki/). We quantified neuronal processing by measuring the signal-to-noise ratio of the SSVEP at the fundamental frequencies (5.45 and 7.5Hz) of each visual shape.
Our results suggest an enhanced occipital SSVEP signal-to-noise ratio for a given shape by a congruent over incongruent sound, such that attending a sound enhances the corresponding visual shape in accord with sound-shape correspondences. Interestingly, such effects emerge when attention is directed towards sound and shape features but not when attention is directed away. Our results highlight that neuronal signatures of audio-visual sound-shape correspondence are influenced by endogenous feature-based attention, which may act globally across corresponding visual and auditory features.
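For context, the SSVEP signal-to-noise ratio referred to above is commonly computed as the power at a tagging frequency divided by the mean power of neighbouring frequency bins. The sketch below uses simulated data and assumed parameter values, not the authors' exact pipeline.

```python
import numpy as np

def ssvep_snr(signal, fs, target_freq, n_neighbors=10):
    """Power at target_freq divided by the mean power of neighbouring FFT bins."""
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    target_bin = int(np.argmin(np.abs(freqs - target_freq)))
    neighbors = np.r_[target_bin - n_neighbors:target_bin,
                      target_bin + 1:target_bin + n_neighbors + 1]
    return power[target_bin] / power[neighbors].mean()

# Simulated occipital channel: a 7.5 Hz SSVEP plus broadband noise, 4-s epoch.
fs = 250.0
t = np.arange(0, 4.0, 1.0 / fs)
eeg = 0.8 * np.sin(2 * np.pi * 7.5 * t) + np.random.randn(t.size)
print(f"SNR at 7.5 Hz: {ssvep_snr(eeg, fs, 7.5):.1f}")
```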
Hide abstract
P1.56 Multisensory Responses in the Primary Auditory Cortex of the Cat
Boucher, C., Butler, B. & Lomber, S. G. University of Western Ontario
Show abstract
Core auditory cortex of the cat is composed of primary auditory cortex (A1) and the anterior auditory field (AAF). Neurons in both fields respond strongly to acoustic stimuli and are tonotopically organized. In hearing animals, a small number of cells in AAF respond to tactile stimulation. Following early-onset hearing loss, a much larger proportion of neurons in AAF become responsive to tactile and/or visual stimulation, indicating that crossmodal sensory reorganization is robust in this cortical area. Unfortunately, the results from similar studies of A1 neurons are not as clear. In hearing cats, studies do not show multisensory responses in A1 (Stewart & Starr, 1970; Rebillard et al., 1977; Kral et al., 2003). Furthermore, only one study has documented crossmodal plasticity in A1 following perinatal hearing loss (Rebillard et al., 1977), while others have not (Stewart & Starr, 1970; Kral et al., 2003). An important methodological consideration surrounding these studies is whether the anesthetic used may have played a role in revealing crossmodal plasticity in deaf A1. Overall, studies that used ketamine or pentobarbital as the primary anesthetic were able to identify crossmodal plasticity in deaf A1, while studies utilizing halothane were not. Therefore, the purpose of this investigation was to examine whether crossmodal responses might be evident under ketamine. Here, we measure multisensory responses in A1 of hearing animals and examine the visual characteristics to which A1 maximally responds. These results will serve as a control for future studies that will examine the degree to which A1 undergoes crossmodal plasticity following perinatal deafness.
Hide abstract
P1.57 Hand distance modulates the electrophysiological correlates of target selection during a tactile search task
Ambron, E.A., Mas-Casadesús, A.M.C. & Gherri, E.G. University of Pennsylvania
Show abstract
This study investigated whether the N140cc ERP component, described as a possible electrophysiological marker of target selection in touch, is modulated by body posture. Participants performed a tactile search task in which they had to localise a tactile target, presented to the left or right hand, while a simultaneous distractor was delivered to the opposite hand. Importantly, the distance between target and distractor (the separation between the hands) was manipulated across experimental conditions (near vs. far hands). Results showed reduced errors and enhanced amplitudes of the late N140cc when the hands were far apart compared to when they were in close proximity. This suggests that the competition between target and distractor is stronger when the hands are close together in the near condition, resulting in a degraded selection process. These findings confirm that the N140cc reflects target selection during the simultaneous presentation of competing stimuli and demonstrate for the first time that the attentional mechanisms indexed by this ERP component are based, at least in part, on postural representations of the body.
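For readers unfamiliar with lateralized components such as the N140cc, they are typically obtained as the difference between activity recorded contralateral and ipsilateral to the target, averaged over the two target sides. The sketch below is schematic, with hypothetical arrays and electrode assignments, not the authors' analysis code.

```python
import numpy as np

def lateralized_difference(erp_left_target, erp_right_target):
    """Contralateral-minus-ipsilateral difference wave (e.g., for the N140cc).

    Each input has shape (n_times, 2): column 0 holds a left-hemisphere
    electrode, column 1 a right-hemisphere electrode, averaged over trials
    in which the target was at the named hand.
    """
    # Contralateral = right hemisphere for left-hand targets, and vice versa.
    contra = 0.5 * (erp_left_target[:, 1] + erp_right_target[:, 0])
    ipsi = 0.5 * (erp_left_target[:, 0] + erp_right_target[:, 1])
    return contra - ipsi

# Illustrative random "ERPs" (200 time samples, 2 electrodes).
rng = np.random.default_rng(1)
diff_wave = lateralized_difference(rng.standard_normal((200, 2)),
                                   rng.standard_normal((200, 2)))
```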
P1.58 Networks supporting auditory-visual speech: evidence from invasive neural recordings in humans
Ahn E., Plass J., Rakochi A., Stacey W. & Brang D. University of Michigan
The presence of congruent visual lip movements with auditory speech improves speech perception in noisy environments, whereas incongruent lip movements (e.g., an auditory /BA/ and a visual /GA/) can alter the perceived content of speech. Speech-related visual cues (including lip movements) typically begin prior to the onset of auditory speech signals, enabling visual information to bias auditory processes. Prior research using fMRI indicates that phoneme information extracted from lip movements facilitates the processing of auditory speech signals through a network involving the posterior superior temporal sulcus. While fMRI is adept at examining large changes in local activity, it is relatively insensitive to other forms of neural communication, particularly those used in multisensory contexts such as phase-resetting of intrinsic oscillatory activity. Furthermore, fMRI lacks the temporal resolution needed to identify some time-varying aspects of network communication. To better understand the neural mechanisms through which lip articulations modulate auditory speech perception, we acquired intracranial electrocorticographic recordings from a large group of patients (n=15) during an auditory-visual speech perception task. Examining event-related potentials, low-frequency oscillatory activity, and measures of population spiking rates, we show that lip articulations relay information across a network involving posterior fusiform face areas and visual motion area MT to temporal auditory areas, modulating auditory processes before the onset of speech signals. These data are consistent with predictive coding models of perception, in which the visual lip movements prepare the auditory neurons in expectation of a specific oncoming phoneme, in order to facilitate perceptual processes.
P1.59 Event-related brain potentials (ERPs) during peripheral and central visual field stimulation in the context of self-motion perception (vection)
Keshavarz, B., Haycock, B., Adler, J. & Berti, S. Toronto Rehabilitation Institute – University Health Network
The perception of self-motion can be induced by stimulation of the visual sense alone, in the absence of actual physical movement (vection). The present study measured human event-related brain potentials (ERPs) to investigate the sensory processes underlying vection. We presented participants with a visual stimulus consisting of alternating black-and-white vertical bars that moved in a horizontal direction for a brief period (2.5s-3.5s). When presented for a longer duration, the stimulus created the sensation of circular vection about the yaw axis. The stimulus was presented on a screen that was divided into a central and a surrounding peripheral visual area. Both areas moved independently of each other, requiring an intra-visual integration of the peripheral and central stimulation. This resulted in four different movement patterns: (1) the peripheral and the central stimulus moved in the same direction, (2) in opposite directions, (3) the peripheral stimulus remained stationary while the central field moved, or (4) vice versa. The visual stimulus was also varied with respect to the bars' width (narrow vs. wide). Vection intensity and duration were reported verbally. In general, the visual stimulation elicited vection that varied in intensity and duration (i.e., vection was weakest and shortest when the central stimulus moved and the peripheral stimulus was stationary). ERP results demonstrated that movement onset of the stimulation elicited parieto-occipital P2 and N2 components. The amplitudes of the ERP components differed significantly between the four movement patterns (irrespective of stimulus type); however, they did not fully mirror the subjective vection ratings reported by the participants. We argue that the ERP findings reflect an early sensory processing stage that precedes and contributes to the subjective sensation of vection.
P1.60 Disentangling processing speed-up versus true multisensory integration using Support Vector Machine method
Mercier M.R. & Cappe, C. CNRS
It is now recognized that multisensory integration starts early in the sequence of sensory processing. As a consequence, it introduces temporal differences in the dynamics of brain activation, making it difficult to assess later multisensory integration effects, that is, to evaluate whether any later difference between multisensory and unisensory conditions is truly related to multisensory integration or is simply a corollary of the early multisensory integration effect. To resolve this confound we propose here a new type of analysis based on Support Vector Machines.
The Support Vector Machine method provides extremely powerful tools for analyzing complex and dense datasets. In neuroscience this approach has been widely employed in brain imaging, where it is often referred to as Multi-Variate Pattern Analysis. Recently, several EEG and MEG studies have illustrated its relevance for decoding brain activity over time, for instance to discriminate brain activations related to visual categories.
In the present research we present a new approach to characterize multisensory integration processes. Based on the additive model, we use a linear classifier first trained on the sum of the unisensory conditions and then tested on the multisensory condition. The classifier's performance indicates how much of the brain signal elicited in the multisensory condition can be predicted by the additive model. Moreover, a temporal generalization technique allows us to disentangle true multisensory effects from speeded/lagged effects when comparing the multisensory condition to the unisensory conditions.
We illustrate this new approach in an EEG experiment in which subjects had to identify unpredictable auditory and/or visual targets embedded within a stream of audiovisual noise. The results reveal two types of "multisensory effect": one related to integration processes and another accounting for the speed-up of cognitive processes. We further extend the relevance of this new approach to extracting signals related to decision processes.
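To make the classifier logic concrete, here is a minimal Python sketch (using scikit-learn, with synthetic data and invented dimensions; it is not the authors' pipeline): a linear SVM is trained on the sum of the unisensory responses at each training time point and tested on the multisensory responses at every test time point, yielding a temporal generalization matrix.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n_trials, n_channels, n_times = 100, 32, 60   # illustrative dimensions

# Synthetic single-trial EEG for auditory (A), visual (V) and audiovisual (AV)
# conditions, plus "noise-only" trials to classify against.
A = rng.normal(size=(n_trials, n_channels, n_times))
V = rng.normal(size=(n_trials, n_channels, n_times))
AV = rng.normal(size=(n_trials, n_channels, n_times))
noise = rng.normal(size=(n_trials, n_channels, n_times))

def decode_generalization(train_pos, train_neg, test_pos, test_neg):
    """Train a linear classifier at each training time point and test it at every
    test time point (temporal generalization); returns a time x time accuracy matrix."""
    n_t = train_pos.shape[2]
    scores = np.zeros((n_t, n_t))
    for t_train in range(n_t):
        X = np.vstack([train_pos[:, :, t_train], train_neg[:, :, t_train]])
        y = np.r_[np.ones(len(train_pos)), np.zeros(len(train_neg))]
        clf = make_pipeline(StandardScaler(), LinearSVC(dual=False)).fit(X, y)
        for t_test in range(n_t):
            Xt = np.vstack([test_pos[:, :, t_test], test_neg[:, :, t_test]])
            yt = np.r_[np.ones(len(test_pos)), np.zeros(len(test_neg))]
            scores[t_train, t_test] = clf.score(Xt, yt)
    return scores

# Additive-model logic: train on the sum of unisensory responses (A + V),
# then test on the recorded multisensory responses (AV).
scores = decode_generalization(A + V, noise, AV, noise)
print(scores.shape, scores.mean())
```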
P1.61 Visual Activation and Lateralized Area Prostriata Induced During a Perceived Trance Process by an Expert
DeSouza, J.F.X. & Rogerson, R. York University
Altered states of consciousness have been recorded worldwide since time immemorial. Although until recently viewed as a cultural phenomenon that defies diagnostic criteria (Huskinson, 2010), neuroscientists are increasingly investigating changes in brain circuitry during trance processes. The authors used fMRI to explore the perception of a trance process through a case study with an experienced Isangoma (traditional South African healer), with the aim of exploring the BOLD signal in associated regions. Following a stimulus of music selected by the healer to induce trance, a 3T Siemens Tim Trio MRI scanner was used to acquire functional and anatomical images with a 32-channel head coil. Analysis with the General Linear Model (GLM), based on her reports of when she experienced trance, showed positive BOLD activation in visual and auditory cortex in both hemispheres. Other brain regions that showed a tight correlation with her trance perception were the right parietal cortex, right frontal cortex and right area prostriata (P<0.05, Bonferroni-corrected). The orbitofrontal cortex was most negatively correlated with the perception of trance and showed the largest difference between high and low trance perception. It is the culturally appropriate auditory stimulus that seems to trigger a trance process in the subject. In Hove et al.'s (2015) comparative research on shamans in perceived trance, the use of brain regions as anatomical seeds is evident; in the present findings, a higher correlation with perceived trance is visible in all of these areas (dACC, posterior ACC and their PPC regions). Unlike Hove et al., however, the authors show a strong correlation with the subject's perceived trance, and hope not only to exemplify correlations of trance perception but also to add to the budding neuroscientific inquiry regarding brain circuitry and trance processes.
P1.62 Parkinson’s Disease and Oscillatory Brain Rhythms: Putative EEG changes in Parkinson’s patients performing the sound induced double-flash illusion task before and after neurorehabilitation.
Cohan, R. & DeSouza, J.F.X. Department of Psychology, Centre for vision research, York University
A mounting body of evidence suggests that the prodromal and clinical symptoms of Parkinson’s disease (PD) such as impaired circadian rhythm, uncoordinated movements, and distortion in beat and time perceptions could be explained by the decrease in the dopamine-dependent oscillatory brain rhythms. Multiple studies have confirmed the role of decreased levels of global alpha frequency (8-14 Hz) as one of the main underlying neurophysiological causes of sub-optimal perception and movement in PD.
In the past few years novel neurorehabilitation interventions such as dance have shown a marked improvement in the post-intervention alpha power, emphasizing the importance of external multisensory cues for patients with PD (PwPD). In the case of dancing, the amalgamation of movement to the music, copying the instructor, and synchronizing movements with the partner, could all be the plausible mechanisms for the post therapy improvements.
Since there seems to be a correlation between dance, alpha frequencies and the amelioration of symptoms, we hypothesized that PwPD should show an improvement in the length of the temporal window of perception post-intervention, coupled with increased alpha frequencies. Our team uses the sound-induced double-flash illusion paradigm to test the temporal window of sensory integration during the coupling of sound and visual stimuli in two groups, PwPD and healthy age-matched controls, before and after dance. A third group was also added to control for possible interference of dopamine-replacement therapy (mainly L-dopa and Carbidopa) with alpha frequencies.
Acknowledgments: Special thanks to all the former and current members of JoeLab for their hard work and professor DeSouza for his support and guidance.
P1.63 Short- and long-term evaluation of the effects of dance on people with Parkinson’s Disease.
Bearss, K. & DeSouza, J.F.X York University
Dance is a multi-dimensional physical exercise that involves widespread activity across different brain regions and demonstrates positive short-term benefits on motor function for people with Parkinson's Disease (PwPD). Our current study examines the effects of dance training on both motor and non-motor symptoms, and correlates these effects with onsite recordings of resting-state EEG (rsEEG). METHODS. Short-term: 17 PwPD, mild-severity (MH&Y = 1.31, SD = 1.01) (Mage = 68.82, SD = 8.95, NMales = 12) and 19 healthy controls (HC) (Mage = 52.78, SD = 17.30, NMales = 6); and Long-term: 16 PwPD, mild-severity (MH&Y = 1.25, SD = 0.86) (Mage = 68.73, SD = 8.41, NMales = 11, MDiseaseDuration = 5.54, SD = 4.52) were tested before and after dance class using the standardized MDS-UPDRS (I-IV), H&Y, PANAS-X, MMSE, PD-NMS and rsEEG over 3 years. RESULTS. Short-term: PwPD showed greater motor impairment in comparison to HC (p < .001, η2 = .714). Motor impairment improved after a dance class (p < .001, η2 = .479). A significant interaction was found between Condition (PRE vs. POST) and Group (PD vs. HC), with motor improvement in PwPD following the dance class (p < .01, η2 = .479). Negative affect decreased after dance class for both PD and HC (p < .01, η2 = .503). Positive affect was higher after dance class (p < .01, η2 = .255). There was also an interaction in positive affect scores between Condition and Group, with HC positive affect increasing after the dance class (p < .025, η2 = .179). rsEEG global alpha power was highest after the dance class (p < .025, η2 = .210). Long-term: there was no progression of motor impairment across the 3 years (p = .685). CONCLUSIONS. Results indicate the positive benefits of dance for motor, non-motor and neural changes in PwPD. These findings support the implementation of dance as a form of neurorehabilitation for PwPD.
Acknowledgments: We thank our current and past students and volunteers for their ongoing hard work and dedication to the JoeLab and the Dancing with Parkinson’s project.
P1.64 A vestibular-gravitational contribution to perceived body weight
Ferrè, E.R., Frett, T., Haggard, P. & Longo, M.R. Royal Holloway University of London
The weightlessness experienced by astronauts has fascinated scientists and the public. On Earth, body weight is given by Newton’s laws as mass times gravitational acceleration. That is, an object’s weight is determined by the pull of gravity on it. We hypothesised that perceived body weight is – like actual weight – dependent on vestibular-gravitational signals. If so, changes in the experienced force of gravity should alter the experience of one’s own body weight. We asked participants to estimate the weight of two body parts, their hand or their head, both in normal terrestrial gravity and during exposure to experimentally altered gravitational fields, 0g and +1.8g during parabolic flight and +1g using a short arm human centrifuge. For both body parts, there was a clear increase in perceived weight during experience of hypergravity, and a decrease during experience of microgravity. Our results show that experimental alterations of gravity produce rapid changes in the perceived weight of specific individual body parts. Traditionally, research has focused on the social factors for weight perception, as in the putative role of mass media in eating disorders. In contrast, we emphasize that the perception of body weight is highly malleable, and shaped by immediate sensory signals.
P1.65 Perceived timing of active head movements reduced with increased speed
Sachgau, C., Chung, W. & Barnett-Cowan, M. University of Waterloo
The central nervous system must determine which sensory events occur at the same time. Actively moving the head corresponds with large changes in the relationship between the observer and the environment, sensorimotor processing, and spatiotemporal perception. Numerous studies have shown that head movement onset must precede the onset of other sensory events in order to be perceived as simultaneous, indicating that head movement perception is slow. Active head movement perception has been shown to be slower than passive head movement perception and dependent on head movement velocity, where participants who move their head faster than other participants require the head to move even earlier than comparison stimuli to be perceived as simultaneous. These results suggest that head movement perception is slower (i.e., suppressed) when the head moves faster. The present study used a within-subjects design to measure the point of subjective simultaneity (PSS) between active head movement speeds and a comparison sound stimulus. Our results clearly show that i) head movement perception is faster when the head moves faster within-subjects, ii) active head movement onset must still precede the onset of other sensory events (Average PSS: -123 to -52 ms) in order to be perceived as occurring simultaneously even at the fastest speeds (Average peak velocity: 76°/s to 257°/s). We conclude that head movement perception is slow, but that this delay is minimized with increased speed. While we do not provide evidence against sensory suppression, which requires active versus passive head movement comparison, our results do rule out velocity-based suppression.
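As an illustration of how a PSS is typically estimated in this kind of task, the sketch below fits a cumulative Gaussian to the proportion of "sound first" judgments as a function of the sound's onset relative to head-movement onset. The SOAs and response proportions are invented, and the authors may have used a different fitting procedure.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# Illustrative data: SOA of the sound relative to head-movement onset (ms),
# and the proportion of trials on which the sound was judged to occur first.
soa = np.array([-300, -200, -100, -50, 0, 50, 100, 200, 300], dtype=float)
p_sound_first = np.array([0.05, 0.10, 0.20, 0.35, 0.55, 0.75, 0.85, 0.95, 0.98])

def cum_gauss(x, pss, sigma):
    # PSS is the SOA at which "sound first" is reported on 50% of trials.
    return norm.cdf(x, loc=pss, scale=sigma)

(pss, sigma), _ = curve_fit(cum_gauss, soa, p_sound_first, p0=[0.0, 100.0])
print(f"PSS = {pss:.1f} ms, slope (sigma) = {sigma:.1f} ms")
```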
P1.66 Is linear vection enhanced when perceived upright is orthogonal to gravitational upright?
McManus, M. & Harris, L.R. Centre For Vision Research, York University
When gravity cues are unavailable or become unreliable, visual information is weighted more strongly (Harris et al, 2017 Microgravity 3:3). If a conflict is introduced between the body, gravity, and visual cues to upright the reliability of non-visual cues may decrease and thus, since cues are weighted according to their reliability, enhance vision. Here we tested this hypothesis using the perceived travel distance induced by optic flow in the presence or absence of a conflict between visual and non-visual orientation cues.
Participants were tested standing, supine, or prone (thus varying the relationship between gravity and visual orientation cues) in either a structured visual environment aligned with their body or a star-field. During each trial a target was simulated in an Oculus Rift between 10 and 80 m in front of them. The target was then removed, and participants were virtually accelerated towards the target's previously seen location. They pressed a button when they reached the remembered target location. Experiments used a random block design. Following each block, participants' perceived upright was assessed.
Pilot studies using the structured-vision condition found that in the supine and prone postures, participants experienced a visual reorientation illusion (VRI) such that they felt that they were upright and aligned with gravity even though they were physically orthogonal to gravity, indicating a dominance of vision. In this condition, participants in supine and prone postures needed to travel less far than in the upright condition to perceive they had traveled through the target distance.
We conclude that conditions of sensory conflict can increase reliance on vision. The star-field condition will allow us to determine whether this is due to a reweighting of sensory cues associated with a VRI.
Acknowledgments: LRH is supported by a Discovery Grant from the Natural Sciences and Engineering Research Council (NSERC) of Canada and the Canadian Space Agency. MM holds a research studentship from the NSERC CREATE program.
P1.67 When in conflict, choose touch! A visuo-haptic, virtual reality investigation of conflicting shape information in object processing
Kang, H.M. Korea University, Brain and Cognitive engineering, Cognitive systems Lab
Several studies have investigated how vision and touch are integrated and whether one modality may be dominant. A study by Rock and Victor famously claimed that when visual and touch information are incongruent, the brain chooses the visual input, which they called "visual capture". Here, we extend this research on multisensory integration by separating vision and touch in a virtual reality (VR) setup using parametrically generated, novel 3D shapes. Observers see a shape in VR and touch a shape in the real world with the help of 3D-printed objects. The exploration is displayed in real time via hand tracking in VR to increase immersion and believability. We use a simple shape similarity judgment task with multiple, interleaved staircases to investigate shape perception in congruent (visual and haptic shapes are the same) and incongruent (the two modalities differ) conditions. Two objects are presented in succession and participants have to indicate whether they are the same or different. Since the objects are parametrically generated, we can vary both the difference between the first and the second object, as well as the difference between the visual and the haptic display. The staircases are used to find the parameter difference that results in a "same" response. Eighteen participants were recruited for each of three groups to test the influence of instruction on the similarity judgment: 'no instruction', 'attend vision' and 'attend touch'. We found that congruent and incongruent conditions were significantly different in all three groups (all p<.001); importantly, the results showed that participants were biased towards haptic shape judgments, contradicting the earlier findings by Rock and Victor. Although there was a trend towards group differences, our results showed no significant difference in the amount of "haptic capture" with respect to instruction (F=2.345, p=.101). Overall, our findings show a surprisingly resistant haptic dominance in judging conflicting shape information.
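For readers unfamiliar with adaptive staircases, the following sketch shows a generic 1-up/1-down procedure of the kind that could converge on the shape-parameter difference at which "same" and "different" responses are equally likely. The simulated observer, step size and trial count are illustrative assumptions, not the authors' settings.

```python
import numpy as np

rng = np.random.default_rng(4)

def simulated_observer(param_difference, threshold=0.4, noise=0.1):
    """Responds 'different' (True) more often as the parameter difference grows."""
    return param_difference + noise * rng.standard_normal() > threshold

def run_staircase(start=1.0, step=0.1, n_trials=60):
    """Simple 1-up/1-down staircase on the shape-parameter difference:
    decrease the difference after a 'different' response, increase it after 'same',
    so the track converges where the two responses are equally likely."""
    diff, track = start, []
    for _ in range(n_trials):
        track.append(diff)
        responded_different = simulated_observer(diff)
        diff = max(0.0, diff - step if responded_different else diff + step)
    return np.array(track)

track = run_staircase()
# Average the last reversals/trials as the point of subjective equality.
print("estimated same/different boundary:", track[-20:].mean())
```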
P1.68 Vestibular signals modulate perceptual alternations in binocular rivalry from motion conflict
Keys, R.T., Paffen, C., MacDougall, H., Alais, D. & Verstraten, F.A.J. School of Psychology, University of Sydney
Visual and vestibular information are both informative about self-motion, and recent work shows that vestibular signals can influence visual motion perception. Here we ask whether vestibular input can influence the dynamics of binocular rivalry created by opposed visual motions. In 64 s trials, 10 observers in a CKAS 6 degrees-of-freedom motion platform system (Hexapod) underwent sinusoidal yaw rotations that oscillated between ±15 degrees with a full cycle period of 4 seconds while viewing motion rivalry. Observers viewed left- and rightward-moving gratings which were dichoptically presented via an Oculus head-mounted display, and continuously tracked their dominant visual motion percept while their head and eye movements were recorded. The rivalry tracking time-series were epoched into 4 s periods to line up with one cycle of self-motion and averaged to show the mean dominance percept for every position of the yaw-rotation cycle. Fitting a sinewave to the epoched data of each participant showed that rivalry dominance tended to correlate with the direction of yaw rotation. The group mean sine period was 3.88 s, indicating that the motion rivalry dynamics were entrained by the self-motion oscillation. Fitted sine amplitudes varied between observers from 0.04 to 0.31, relative to a maximum amplitude of 0.5. The phase of the sine fitted to the rivalry alternations was stable and tightly linked to the phase of yaw rotation. For 7/10 observers it was in-phase (the dominant motion matched the direction of self-motion), and for 3/10 it was in anti-phase (the dominant motion was opposite to the direction of self-motion). Control data showed that the same yaw rotation had no influence on motion rivalry dynamics between upwards and downwards directions. We conclude that vestibular signals from self-motion feed into the visual system and can help resolve perceptual ambiguity arising from motion rivalry.
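A minimal sketch of the sinewave-fitting step described above, assuming an epoched dominance time series sampled over one 4 s yaw cycle (the data here are synthetic and the free parameters are illustrative):

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative epoched data: mean dominance (0 = leftward percept, 1 = rightward)
# at each time point of the 4 s yaw-rotation cycle, for one observer.
rng = np.random.default_rng(1)
t = np.linspace(0, 4, 200, endpoint=False)
dominance = 0.5 + 0.2 * np.sin(2 * np.pi * t / 4 + 0.3) + 0.05 * rng.standard_normal(t.size)

def sine(t, amp, period, phase, offset):
    return offset + amp * np.sin(2 * np.pi * t / period + phase)

p0 = [0.2, 4.0, 0.0, 0.5]   # starting guesses: amplitude, period (s), phase, offset
(amp, period, phase, offset), _ = curve_fit(sine, t, dominance, p0=p0)
print(f"amplitude={amp:.2f}, period={period:.2f} s, phase={phase:.2f} rad")
```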
P1.69 Illusions of self-motion perception in the visual and vestibular systems during cue conflict
Kirollos, R. & Herdman, C. M. Carleton University – Center for Visualization and Simulation
In most situations, the information received by the visual, vestibular and other sensory systems regarding our self-motion is consistent. However, there are circumstances in which the sensory systems receive conflicting self-motion information, causing disorientation and potentially motion sickness. Most research supports the notion that the visual system overrides other cues for deciding self-motion direction during sensory conflict. However, much of the research on self-motion has not isolated the unique contribution of the vestibular system. The present research examined whether the visual or the vestibular system dominates during cue conflict in deciding self-motion direction. Measures of perceived illusory speed, direction and duration were indexed using a device that participants rotated when they experienced self-motion. In Experiment 1, caloric stimulation was used to deliver cool air to the inner ear. This changed the fluid dynamic properties of the horizontal semi-circular canal, resulting in illusory self-rotation in the yaw axis. In Experiment 2, visual illusory self-rotation was induced in the yaw axis using a stimulus presented on a virtual reality headset. In a final experiment, participants received visual and vestibular cues to self-rotation simultaneously that signalled motion in opposite directions but that were of approximately equal perceived speed. Surprisingly, results indicated that participants relied on the direction of motion signalled by the vestibular cues during cue conflict as often as they relied upon visual cues. These results suggest that the vestibular system has an equally important role in deciding self-motion direction during cue conflict and that self-motion direction is not dominated by visual cues during cue conflict. Future research should focus on the use and development of more precise methods to stimulate the vestibular system to further uncover its contribution to self-motion perception.
P1.70 Feeling the beat: An exploration into the neural correlates of somatosensory beat perception
Gilmore, S. & Russo, F. Ryerson University
Musical rhythms elicit a perception of a beat (or pulse) which in turn tends to elicit spontaneous motor synchronization (Repp & Su, 2013). Electroencephalography (EEG) measurement has revealed that endogenous neural oscillations dynamically entrain to beat frequencies of musical rhythms even in the absence of overt motor activity, providing a neurological marker for beat perception (Nozaradan, Peretz, Missal, & Mouraux, 2011). Although beat perception seems to show an auditory advantage, recent research suggests that rhythms presented through tactile stimulation of the skin can also elicit motor synchronization, albeit to isochronous rhythms only (Ammirante, Patel, & Russo, 2016). The current research passively exposes participants to simple and complex rhythms from auditory, tactile, and audio-tactile sources. In addition, following passive exposure all participants will complete an active sensorimotor synchronization task with the same stimuli. Fourier analysis of EEG recordings and timing precision of sensorimotor synchronizations will be compared across the different modality conditions. Data collection for this study is currently in progress. Results may provide evidence that informs best-practices regarding tactile perception of rhythm, as well as provide a broader understanding of the auditory advantage for beat perception. Finally, the results may lead to new insights regarding the potential for multimodal enhancement of beat perception.
P1.71 The Development of Auditory–tactile Integration
Stanley, B., Chen, Y.C., Lewis, T.L., Maurer, D., & Shore, D.I. McMaster University
Adults form a single coherent percept of the environment by optimally integrating sensory signals from multiple modalities. However, this ability changes throughout childhood and into adolescence. Here we measured these developmental changes using the fission and fusion illusions. Fission occurs when a single stimulus (e.g., a tap to the finger) is perceived as two when accompanied by two stimuli from another modality (e.g., auditory beeps); fusion occurs when two stimuli are perceived as one when accompanied by a single stimulus from another modality. Three groups of children (9-, 11-, and 13-year-olds) and adults were tested on both the tap illusion induced by sound and the sound illusion induced by tap. Participants reported how many taps (or sounds) they perceived while instructed to ignore the signals in the other modality. On each trial, either one or two taps (beeps) were accompanied by either 0, 1, or 2 beeps (taps). Congruent trials consisted of equal numbers of taps and beeps; incongruent trials consisted of combinations of stimuli designed to produce fission or fusion illusions. The magnitude of the illusions was calculated by subtracting the accuracy on incongruent trials from that on congruent trials. The results to date (N = 18–20/group) reveal three findings of interest. First, the magnitude of the fission illusion exceeded the magnitude of the fusion illusion in all age groups. Second, the tap illusion induced by sound was greater than the sound illusion induced by tap for all age groups tested. Third, the magnitude of fission for the tap illusion induced by sound tended to be larger in 9-year-olds than in adults, but was similar in 11-year-olds and adults. In contrast, there was no age-related difference observed for fission in the sound illusion induced by tap. Overall, the pattern of results was not completely adult-like until 11 years of age.
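The illusion-magnitude measure described above is simply a difference of accuracies; a tiny worked sketch (with invented accuracy values) makes the computation explicit:

```python
# Illusion magnitude = accuracy on congruent trials minus accuracy on incongruent trials.
# Invented accuracies for one child judging taps while ignoring beeps:
acc_1tap_1beep = 0.92   # congruent: one tap, one beep
acc_1tap_2beep = 0.55   # incongruent fission trials: one tap, two beeps
acc_2tap_2beep = 0.80   # congruent: two taps, two beeps
acc_2tap_1beep = 0.60   # incongruent fusion trials: two taps, one beep

fission_magnitude = acc_1tap_1beep - acc_1tap_2beep   # larger value = stronger fission
fusion_magnitude = acc_2tap_2beep - acc_2tap_1beep    # larger value = stronger fusion
print(fission_magnitude, fusion_magnitude)
```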
P1.72 Decoding the sound of hand-object interactions in early somatosensory cortex
Bailey, K. M., Giordano, B. L., Kaas, A. & Smith, F. W. University of East Anglia
Neurons, even in the earliest sensory regions of cortex, are subject to a great deal of contextual influence from both within- and across-modality connections. Such connections provide one way for prior experience and the current context to shape the responses of early sensory areas. Recently we have shown that cross-modal connections from vision to primary somatosensory cortex (S1) transmit content-specific information about familiar but not unfamiliar visual object categories. In the present work, we investigated whether hearing sounds depicting familiar hand-object interactions would also trigger such activity in S1. In a rapid event-related fMRI experiment, right-handed participants (N=10) listened to five exemplars from each of three categories of auditory stimuli: hand-object interactions (e.g. bouncing a ball), animal calls (e.g. dog barking), and pure tones (unfamiliar control). Participants listened attentively and performed a one-back repetition counting task, which eliminated any need for a motor response during scanning. An independent finger-mapping localizer was completed afterwards and used to define finger-sensitive voxels within anatomically drawn masks of the right and left post-central gyrus (rPCG and lPCG, respectively). Multivariate pattern analysis revealed significant decoding of different hand-object interactions within bilateral PCG. Crucially, decoding accuracies were significantly higher for hand-object interactions than for both control categories in rPCG. In addition, decoding of pure tones was at chance in all analyses. These findings indicate that hearing sounds depicting familiar hand-object interactions elicits different patterns of activity within finger-sensitive voxels in S1, despite the complete absence of tactile stimulation. Thus, cross-modal connections from audition to early somatosensory cortex transmit content-specific information about familiar hand-action sounds. Our results are broadly consistent with Predictive Coding views of brain computation, which suggest that a key goal of even the earliest sensory areas is to use the current context to predict forthcoming stimulation.
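As a rough sketch of the multivariate pattern analysis logic (not the authors' exact pipeline, which would operate on the recorded trial patterns within the independently defined finger-sensitive voxels and an fMRI-appropriate cross-validation scheme), cross-validated decoding of sound category from voxel patterns could look like this; all data and dimensions below are synthetic:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score, StratifiedKFold

rng = np.random.default_rng(2)

# Illustrative single-trial patterns: 75 trials x 200 finger-sensitive voxels,
# 25 trials per category (hand-object sounds, animal calls, pure tones).
n_per_cat, n_voxels = 25, 200
X = rng.standard_normal((3 * n_per_cat, n_voxels))
y = np.repeat(["hand-object", "animal", "tone"], n_per_cat)

# Cross-validated linear decoding of sound category from voxel patterns.
clf = SVC(kernel="linear")
cv = StratifiedKFold(n_splits=5)
acc = cross_val_score(clf, X, y, cv=cv)
print(f"mean decoding accuracy: {acc.mean():.2f} (chance = 0.33)")
```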
P1.73 Musical expertise weakens the cost of dividing attention between vision and audition
Ciaramitaro, V.M., Chow, H.M., & Silva, N. University of Massachusetts Boston
Recently we found that dividing attention across sensory modalities in a bimodal dual-task can impair performance, decreasing auditory contrast sensitivity, under high versus low visual load (Ciaramitaro et al., 2017). Musical training involves concurrently attending two or more senses (e.g. reading musical scores and listening to sounds) and can weaken the cost of unimodal dual-task performance (Moradzadeh et al., 2015). Here we investigate if musical experience weakens the cost of bimodal dual-task performance.
Participants performed an audio-visual dual task containing two intervals of binaural white noise and a concurrent RSVP stream of letters at fixation. For the auditory task, participants reported which interval contained amplitude-modulated white noise, with modulation depth varying across trials. For the visual task, participants judged which interval contained white letters (easy visual task) or a greater number of the target letter 'A' (difficult visual task). We measured auditory contrast sensitivity by fitting the auditory data with a Weibull function to determine auditory thresholds. To quantify the cost of crossmodal attention we compared visual accuracy and auditory thresholds across easy and hard visual conditions. We expected a smaller cost on auditory performance from a competing harder versus easier visual task in musicians (n=28) compared to non-musicians (n=16). Individuals were classified as musicians if they met two (amateur; n=16) or three (experienced; n=12) of the following criteria: at least 10 years of musical training; training onset at 8 years of age or younger; and an average of 15 hours of practice per week.
We found a smaller cost of divided crossmodal attention for musicians versus non-musicians. However, only male, not female, musicians showed enhanced auditory processing, i.e., smaller differences in auditory contrast sensitivity for high versus low visual load, with concurrently weaker or no differences in visual performance. Some of these gender differences may reflect differences in musical competence within our sample that were not captured by our classification criteria.
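To illustrate the Weibull threshold fit mentioned in the methods, here is a minimal sketch for a two-interval task with a 50% guessing rate; the modulation depths, accuracies and the 75%-correct threshold criterion are illustrative assumptions rather than the authors' values:

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative 2-interval forced-choice data: modulation depth of the amplitude-
# modulated noise vs. proportion correct, for one listener in one visual-load condition.
depth = np.array([0.02, 0.04, 0.08, 0.16, 0.32, 0.64])
p_correct = np.array([0.52, 0.55, 0.68, 0.85, 0.95, 0.99])

def weibull_2afc(x, alpha, beta):
    """2AFC Weibull psychometric function: guessing rate 0.5, no lapse term."""
    return 0.5 + 0.5 * (1.0 - np.exp(-(x / alpha) ** beta))

(alpha, beta), _ = curve_fit(weibull_2afc, depth, p_correct, p0=[0.1, 2.0])

# Modulation depth yielding 75% correct: invert the fitted function at p = 0.75.
threshold = alpha * (np.log(2.0)) ** (1.0 / beta)
print(f"alpha={alpha:.3f}, beta={beta:.2f}, 75%-correct threshold={threshold:.3f}")
```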
Jensen, A., Merz, S., Spence, C., & Frings, C. University of Trier
In daily life, signals from different sensory modalities are integrated in order to enhance multisensory perception. However, an important, yet currently still controversial, topic concerns the need for attention in this integration process. To investigate the role of attention we turned to multisensory distractor processing. Note that multisensory target processing is typically confounded with attention as people attend to the stimuli that they have to respond to. We designed a multisensory flanker task in which the target and distractor stimuli were both multisensory and the congruency of the features (auditory and visual) was varied orthogonally. In addition, we manipulated participants’ focus of view. Distractor congruency effects were modulated by this manipulation. When the distractor was fixated, congruency effects of both feature dimensions interacted, while congruency effects were independent when the distractor was presented laterally. These results suggest that distractors presented laterally were processed at the level of features whereas distractors presented centrally (at fixation) were processed as feature compounds (i.e., objects). Multisensory integration of irrelevant stimuli is thus dependent on spatial attention.
P1.75 Attentional modulation of multisensory event perception in a voluntary reaching movement
Loria, T., Tanaka, K., Tremblay, L., & Watanabe, K. Faculty of Kinesiology and Physical Education, University of Toronto and Faculty of Science and Engineering, Waseda University
Previous studies reported conflicting evidence for the hypothesis that attention influences multisensory integration (i.e., Helbig & Ernst, 2008; Talsma et al., 2010). The current study probed whether spatial-attention at the onset of a voluntary reaching movement would influence the processing and integration of task-irrelevant audio-visual stimuli. The participant’s primary task was to point/reach towards one of three rectangles displayed on a touch screen monitor. At movement onset, secondary stimuli consisting of one flash (F) were presented with one or two beeps (B), that included unimodal (1F0B), congruent (1F1B), and incongruent (1F2B) conditions. After each trial, the participants reported the number of flashes, which revealed a fission illusion in the 1F2B condition (Shams et al., 2000). The secondary stimuli were deemed to be attended (i.e., within the target rectangle) or unattended (i.e., in one of the two other rectangles). Reaching movements could be towards any of the three rectangles. Accuracy in the unimodal (1F0B) and bimodal congruent conditions (1F1B) was lower when presented within the unattended vs. attended rectangle. Also, the strength of the fission illusion was reduced at the unattended compared to the attended rectangle. An increased distance between the secondary stimuli and the attended rectangle influenced response accuracy. Indeed, both the accuracy in the unimodal and congruent conditions as well as the magnitude of the fission illusion decreased in the unattended-far compared to both the unattended-close and attended rectangles. Altogether, the results indicate a reduced perception of sensory events as well as reduced evidence of multisensory integration at unattended locations when initiating a voluntary reaching movement.
Acknowledgments: JSPS KAKENHI (JP17H00753); Japan Science and Technology Agency CREST (JPMJCR14E4); Natural Sciences and Engineering Research Council of Canada
P1.76 Self-produced walking sounds change body-representation: An investigation on individual differences and potential positive impact on physical activity
Tajadura-Jiménez, A., Zhang, L., Newbold, J., Rick, P. & Bianchi-Berthouze, N. Universidad Carlos III de Madrid & University College London
Auditory contributions to mental body-representations, and the subsequent impact on behaviour and bodily feelings, remain largely unexplored. Our studies have demonstrated changes in body-representation induced by sounds paired with bodily actions. We recently showed that the real-time alteration of sounds produced by people walking on a flat surface, so that sounds are consistent with those produced by a lighter vs. heavier body, can lead people to represent their bodies as thinner/lighter, feel happier and walk with more dynamic swings and shorter heel strikes. In the present study we investigated whether this sound-driven bodily-illusion varies according to individual differences (body weight, gender, fitness, body perceptions/aspirations), and tested the potential of this illusion to facilitate more demanding physical activity. We asked participants to use a gym step (Experiment 1, N=37) or climb stairs (Experiment 2, N=22) under three real-time sound manipulations of the walking sounds differing in frequency spectra. We measured changes in body-representation with a body visualizer tool, by monitoring gait, and with a questionnaire on bodily feelings. We replicated previous results that participants represented their bodies as thinner in the high frequency “light” sound condition, with associated changes in gait (applied force, stance time, acceleration, cadence) and bodily feelings (feeling quicker, lighter, feminine, finding exercise easier). The effects of sound on visualized body size interacted with those of participant’s actual body weight and aspirations to be more masculine, but not reported body fitness or gender. The effects of sound on gait and feelings of being quick, light and finding easy/tiring to exercise interacted with those of participant’s actual weight and body fitness. We also showed that the effects do not hold once the altered sound feedback was removed. We discuss these results in terms of malleability of body-representations and highlight the potential opportunities for enhancing people’s adherence to physical activity.
Acknowledgments: AT was supported by the ESRC grant ES/K001477/1 (“The hearing body”) and by RYC-2014-15421 and PSI2016-79004-R (“MAGIC SHOES: Changing sedentary lifestyles by altering mental body-representation using sensory feedback”; AEI/FEDER, UE), Ministerio de Economía, Industria y Competitividad of Spain. JN and NB were supported by the EPSRC EP/H017178/1 grant (“Pain rehabilitation: E/Motion-based automated coaching”). We thank Yvette Garfen for her assistance with data collection and Cintia Pechamiel Jiménez for her assistance with the gait analysis.
P1.77 Neural circuits for visual, auditory and multisensory decision making in rats
Chartarifsky, L., Pisupati, S. & Churchland A.K. Cold Spring Harbor Laboratory
Decision-making requires assembling information from diverse sources. Existing work has begun to uncover individual areas supporting this process, but structures are usually probed using sensory signals from only one modality. Therefore, little is known about whether common versus independent circuits support decisions about, e.g., auditory vs. visual signals. Here, we aimed to determine whether there are circuits common to decisions about different sensory modalities, focusing on secondary motor cortex (FOF) and posterior striatum (pStr). FOF, a cortical area, is implicated in auditory decision-making, but little is known about its role in visual or multisensory decisions. pStr is implicated in motivation and action initiation but little is known about its role in decision-making. We trained freely-moving rats to report whether the underlying rate of a visual, auditory or multisensory stimulus was higher or lower than an abstract category boundary. Unilateral muscimol inactivation of FOF increased the overall guessing probabilities and biased rats’ decisions towards the inactivated side on visual and auditory trials. Similarly, unilateral inactivation of pStr biased choices towards the inactivated side on visual and auditory trials, however, the overall guessing probability did not change. Preliminary analyses suggest that pStr, but not FOF, inactivation affected rats’ optimal integration on multisensory trials. Changes in movement time to the left vs. right reward port were small and idiosyncratic across animals and sites, arguing that the observed effects were not due to a muscimol-induced motor impairment. Taken together, these results argue that FOF and pStr are part of a circuit common to decisions about multiple sensory modalities, and each area contributes differently to this process. Specifically, we suggest that FOF has a post decisional role, while pStr has a role in linking sensation to action.
P1.78 Auditory-visual Integration during the attentional blink: an event-related potential study
Ching, A., Kim, J. & Davis, C. Western Sydney University
To investigate the role of attention in the integration of visual and auditory information, we used event-related potentials (ERPs) to examine integration processes in the context of the attentional blink. The attentional blink refers to an impairment in detecting a second target (T2) when it appears shortly after an initial one (T1) within a rapid serial presentation stream. We recorded and extracted ERPs following the presentation of audiovisual (AV), visual (V), and auditory (A) T2s in audio-visual presentation streams; T2s were presented either during or after the attentional blink period (200-300 ms or 600-700 ms after the onset of T1, respectively). AV integration processes were quantified as the difference between the audiovisual ERP (AV) and the sum of the separate visual and auditory ERPs (A+V). The results showed that AV and A+V responses were more similar during the attentional blink than outside of it, suggesting that, during the attentional blink, AV integration was suppressed and visual and auditory information were processed independently. AV integration (the difference between AV and A+V ERPs) occurred both before and during the time window of the P3 ERP component (300-500 ms), which is well established as the earliest time window for attentional blink ERP effects. The fact that the attentional blink – which is thought to reflect a late-stage information bottleneck – influences AV integration at early latencies suggests the action of top-down feedback mechanisms, and points to the existence of attentional blink effects that might not be observable in a unisensory paradigm.
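The AV versus A+V quantification described above amounts to computing a difference wave between the audiovisual ERP and the sum of the unisensory ERPs; a minimal sketch with synthetic epochs (dimensions and time window are assumptions, not the study's parameters) is shown below:

```python
import numpy as np

rng = np.random.default_rng(3)
n_trials, n_channels, n_times = 80, 64, 300   # illustrative: 300 samples = 600 ms at 500 Hz

# Illustrative single-trial epochs for audiovisual, auditory-only and visual-only T2s.
epochs_av = rng.standard_normal((n_trials, n_channels, n_times))
epochs_a = rng.standard_normal((n_trials, n_channels, n_times))
epochs_v = rng.standard_normal((n_trials, n_channels, n_times))

# Additive-model contrast: ERP(AV) - [ERP(A) + ERP(V)].
erp_av = epochs_av.mean(axis=0)
erp_sum = epochs_a.mean(axis=0) + epochs_v.mean(axis=0)
integration_effect = erp_av - erp_sum          # channels x time difference wave

# Average over a window of interest (e.g. 300-500 ms, the P3 range mentioned above).
sfreq = 500.0
win = slice(int(0.300 * sfreq), int(0.500 * sfreq))
print(integration_effect[:, win].mean(axis=1).shape)   # one value per channel
```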
P1.79 The role of context in models of multisensory decision-making
Liu, Y., & Otto, T. University of St Andrews
Multisensory decisions are typically faster and more accurate than unisensory decisions. To understand the underlying processes, models of multisensory decision-making are typically fed with the behavioral performance measured with the unisensory component signals individually. Critically, by doing so, the approach makes the so-called context invariance assumption, which states that the processing of a signal is the same whether it is presented in a uni- or multisensory context. However, context invariance is not necessarily true, which presents a major pitfall for any argument based on such models. As it is difficult to test context invariance directly, our approach here is to evaluate two related assumptions that are testable. First, we considered the role of ‘stimulus context’ in unisensory decisions. We compared performance in a unisensory task in trials that either did or did not include a task-irrelevant signal in another modality. We found that performance was faster but less sensitive in trials with irrelevant signals added. Hence, given this speed-accuracy tradeoff, stimulus context invariance was violated. Second, we considered the role of ‘instruction context’ in unisensory decisions. We presented random trial sequences that included auditory, visual, and combined signals. We compared performance with unisensory signals when subjects were instructed to detect targets either from only one modality (unisensory instruction) or from both modalities (multisensory instruction). We found that performance was slower and miss rates were increased under multisensory compared to unisensory instructions. Further, we found that the deteriorated performance was largely due to increased modality switch costs under multisensory instructions. Hence, instruction context invariance did not hold either. As both related assumptions are clearly violated, it is difficult to see how the often hidden context invariance assumption can be taken to be true without testing. We conclude that models of multisensory decision-making have to critically consider the context invariance assumption.
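Modality switch costs of the kind invoked above are typically computed by comparing unisensory trials whose modality differs from that of the preceding trial with trials whose modality repeats; a small illustrative sketch (with invented reaction times) follows:

```python
import pandas as pd

# Illustrative trial-level data under the multisensory instruction: each unisensory
# trial is labelled by its own modality and the modality of the preceding trial.
df = pd.DataFrame({
    "modality":      ["A", "V", "V", "A", "A", "V", "A", "V"],
    "prev_modality": ["V", "A", "V", "V", "A", "A", "A", "V"],
    "rt":            [410, 455, 380, 460, 395, 470, 400, 385],   # ms, invented
})
df["switch"] = df["modality"] != df["prev_modality"]

# Switch cost = mean RT on modality-switch trials minus mean RT on repeat trials.
switch_cost = df.loc[df["switch"], "rt"].mean() - df.loc[~df["switch"], "rt"].mean()
print(f"modality switch cost: {switch_cost:.0f} ms")
```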
P1.80 Your perceived finger orientation depends on whether you move it yourself
Fraser, L. E. & Harris, L. R. Centre for Vision Research, York University
Perception of finger orientation in the absence of vision is biased in right-handers (Fraser & Harris 2016; 2017). Here we compared perception of finger orientation during passive or active finger rotation. We hypothesized that the presence of an efference copy of the finger’s movement would lead to a more precise, more accurate perception of finger orientation, compared to when the finger was passively moved.
Thirty-three right-handed participants sat with their left or right index finger placed in a slot mounted on a motor that rotated the finger, palm down, about the proximal interphalangeal joint (passive condition) or allowed them to rotate their own finger around the same axis (active condition). A horizontal mirror obscured their hand and reflected images from a monitor above. In the passive condition, the participant’s finger was rotated through three “distractor” orientations to a “test” orientation; they reported perceived finger orientation by rotating a line on the screen to match their finger’s orientation. In the active condition, participants rotated their finger to match three “distractor” lines, followed by a “test” line. Tested orientations ranged from 30° inward to 30° outward in 10° steps, with eight repetitions of each orientation.
The left hand index finger was judged as rotated more inward than the right. Active and passive accuracy was comparable for the left index finger, but active matching elicited significantly greater outward error than passive for the right finger. Precision of responses was better for the right hand compared to the left, and in the active compared to the passive task.
Our findings are consistent with research showing hand and finger orientation is systematically mislocalized in the absence of vision. Results suggest interplay between the functional specialization of the hands in right-handers, and the influence of efference copy on finger orientation perception.
P1.81 Visuo-tactile Coherency of Self-generated Action via Surrogate Robot Affects Operator’s Bodily Self-location
Inoue, Y., Yamazaki, K., Saraiji, M. Y., Kato, F., & Tachi, S. The University of Tokyo
A surrogate robot, which has many kinds of sensors to transmit the remote environment to the operator and moves like a human to replicate the operator’s motion, is necessary for telexistence. During teleoperation with a surrogate robot, the operator can experience the environment via the robot’s sensors as if he/she were actually there, and interact with real objects as if he/she had the robot’s body. However, there are still open questions regarding sensory integration in telexistence, in particular the relationship between visuo-tactile coherency and bodily consciousness. To investigate this, we developed an experimental telexistence system that allows the subject to watch his/her own body from a different viewpoint and to make spurious contact with it, as in self-touch, using the robot’s hand either with or without tactile feedback, and we conducted a behavioral experiment to evaluate the change in subjective self-location during the self-touch operation. Results show that tactile feedback from the contacting position (hand) enhances the impression of being inside the robot, whereas tactile feedback from the contacted position (back) slightly reminds the operator of the original place where his/her own body is, suggesting that the visuo-tactile coherency of self-generated action affects the feeling of integration with the surrogate robot in telexistence.
P1.82 Changes in hand localization are influenced by proprioception and prediction
Ruttle, J., ‘t Hart, B.M. & Henriques, D.Y.P. York University, Center for Vision Research
The ability to make accurate and precise goal-directed movements is based on how well we can estimate the position and motion of our limbs. These estimates rely on vision, proprioception, as well as efference-based predictions. We measure the plasticity of proprioceptive and efferent-based estimates of the hand using a series of visuomotor adaptation experiments, which involve reaching with a rotated cursor. We test two things. First, we assess how quickly proprioception and prediction change with altered visual feedback of the hand by measuring hand localization (of the right hand) after each training trial. Second, we fit the multi-rate model of motor learning (Smith et al., 2006) to the reaches, to see if any of the processes in the model predict changes in hand localization. To measure these changes in hand localization, participants estimated the location of the unseen hand when it was moved by the robot (passive localization) and when they generated their own movement (active localization). By comparing these hand estimates after passive movements (proprioception only) and active movements (both proprioception and efferent-based prediction), we are able to tease out a measure of predicted sensory consequences following visuomotor adaptation. The trial-by-trial data suggest that proprioception recalibrates extremely fast. AIC analysis shows that this does not follow the time course of either the slow or the fast process of the reach model (both p<.0008). Upon preliminary analysis, prediction appears to more closely match the slow process; further investigation is required to confirm that it follows the same time course. In addition, changes in prediction do not emerge nearly as fast as those for proprioception, although these efferent-based estimates continue to change with further training. These results suggest that vision recalibrates both of these sources of information about hand position, proprioception and prediction, but in ways that are independent.
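The multi-rate model referred to above (Smith et al., 2006) sums a fast process that learns and forgets quickly with a slow process that learns and forgets slowly; the sketch below simulates the standard two-rate update equations with illustrative parameter values (not the values fitted in this study):

```python
import numpy as np

def two_rate_model(perturbation, A_f=0.59, B_f=0.21, A_s=0.992, B_s=0.02):
    """Two-rate (fast/slow) model of motor adaptation, after Smith et al. (2006).
    Net adaptation x is the sum of a fast state x_f and a slow state x_s, each
    updated from the trial error. Parameter values here are illustrative."""
    x_f = np.zeros(len(perturbation) + 1)
    x_s = np.zeros(len(perturbation) + 1)
    x = np.zeros(len(perturbation))
    for n, p in enumerate(perturbation):
        x[n] = x_f[n] + x_s[n]        # model output (adaptation) on trial n
        error = p - x[n]              # error driving both processes
        x_f[n + 1] = A_f * x_f[n] + B_f * error
        x_s[n + 1] = A_s * x_s[n] + B_s * error
    return x, x_f[:-1], x_s[:-1]

# Illustrative schedule: 100 trials of a constant 30-degree visuomotor rotation.
perturbation = np.full(100, 30.0)
total, fast, slow = two_rate_model(perturbation)
print(total[:5], total[-1])
```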
P1.83 Does Auditory-motor learning improve discrimination ability?
Endo, N., Mochida, T., Ijiri, T. & Nakazawa, K. Department of Life Sciences, Graduate School of Arts and Sciences, The University of Tokyo
Musical expertise and musical training affect auditory performance, possibly due to cognitive change, auditory learning, and auditory-motor association learning. However, the extent to which auditory-motor association learning in particular affects auditory performance has not been fully investigated. In this study, we examined whether pitch discrimination ability was improved by learning to associate tone pitch with a finger-gripping action.
Twenty-four participants performed a discrimination task on tone stimuli with rising pitch contours. The auditory-motor learning group (AM group, n = 12) performed auditory-motor association learning along with the discrimination task, while the auditory learning group (A group, n = 12) did discrimination learning only. During the association learning, the AM group manipulated a device that generates a tone whose pitch varies depending on the finger-gripping force. The experiment lasted 4 days and consisted of an 8-block learning phase (2 blocks per day) and a 2-block discrimination test phase: a pre-test (before the first block of the learning phase) and a post-test (after the last block of the learning phase).
Differential limens (DLs) in the pre-test and post-test in each group were compared by two-way ANOVA with test phase as a within-subject factor and learning group as a between-subject factor. The main effect of test phase was significant, while the main effect of learning group and the test phase x learning group interaction were not significant. The results indicated that auditory-motor learning and auditory-only learning improved pitch discrimination performance equally.
Block-by-block changes in DLs and gripping performance during the learning phase of the AM group were assessed by multiple comparisons. DLs were significantly improved after the second block, and gripping performance was significantly improved after the first block. These results suggest that the progressive development of the auditory-motor association and the progressive improvement in discrimination performance co-occurred over the course of learning.
P1.84 Colour-Shape Correspondences: Examining the Role of Perceptual Features and Emotional Mediation
Dreksler, N. & Spence, C. University of Oxford
Historical accounts of colour-shape correspondences usually start with Wassily Kandinsky’s universal visual language of art and design, which made extensive use of fundamental colours and forms. Kandinsky postulated an underlying and unifying correspondence between the simplest visual feature – lines – and a variety of other sensory dimensions (colour, music, emotions; Kandinsky, 1914). To Kandinsky, a triangle was yellow, a circle blue, and a square was red. Whilst this is, strictly speaking, an example of an intramodal correspondence, modern empirical work in this area has typically been grounded in the field of crossmodal correspondences: That is, bi-directional, non-arbitrary mappings between the attributes (or dimensions) of two sensory modalities, which can give rise to congruency effects in performance and are usually considered to match one another phenomenologically (Spence, 2011).
Kandinsky’s correspondences and other traditional shape stimuli have been explored by various researchers through direct matching and IAT experiments (e.g., Albertazzi et al., 2013; Chen, Tanaka & Watanabe, 2015; Jacobsen, 2002; Makin & Wuerger, 2013). Only a few researchers, notably Malfatti (2014), have used larger and more controlled stimuli sets that are not limited to traditional shapes in order to examine which specific features may drive such correspondences. This paper presents a series of three online experiments that look at the perceptual (complexity, symmetry, roundedness) and affective (liking, arousal) mechanisms that may underlie colour-shape correspondences when a wider array of colours and shapes are presented to participants.
References:
Albertazzi, L., Da Pos, O., Canal, L., Micciolo, R., Malfatti, M., & Vescovi, M. (2013). The hue of shapes. Journal of Experimental Psychology: Human Perception and Performance, 39(1), 37.
Chen, N., Tanaka, K., & Watanabe, K. (2015). Color-shape associations revealed with implicit association tests. PLoS ONE, 10(1), e0116954.
Jacobsen, T. (2002). Kandinsky’s questionnaire revisited: fundamental correspondence of basic colors and forms? Perceptual and Motor Skills, 95, 903–913.
Kandinsky, W. (1914). The art of spiritual harmony. London, England: Constable and Co.
Makin, A. D. J., & Wuerger, S. (2013). The IAT shows no evidence for Kandinsky’s color-shape associations. Frontiers in Psychology, 4, 616.
Malfatti, M. (2014). Shape-to-color associations in non-synesthetes: perceptual, emotional, and cognitive aspects (Doctoral dissertation, University of Trento).
Spence, C. (2011). Crossmodal correspondences: A tutorial review. Attention, Perception, & Psychophysics, 73(4), 971-995.
Acknowledgments: Funded by the MRC and St. John’s College, University of Oxford.
P1.85 Mapping the topography of sensory-selective and multiple demand regions in lateral frontal cortex with combined visual, auditory and tactile fMRI
Tobyne, S.M., Noyce, A.L., Brissenden, J.A. & Somers, D.C. Boston University
Our laboratory and others have recently reported that preferences for sensory modality characterize distinct subregions of lateral frontal cortex (LFC). We previously used an auditory/visual sustained attention fMRI task to identify four bilateral interleaved regions in LFC that are selectively recruited for attention to visual or auditory stimuli (Michalka et al., Neuron, 2015), and have since replicated this finding with an auditory/visual working memory paradigm (Noyce et al., JNeurosci, 2017). These regions form separate sensory-selective intrinsic networks with posterior sensory regions. Using data from the Human Connectome Project, we recently extended these sensory-selective networks (Tobyne et al., NeuroImage, 2017). Here, we extend our auditory/visual fMRI paradigms to include tactile stimulation. While prior unimodal tactile studies have reported LFC recruitment, visual-, auditory-, and tactile-selective cognitive regions have yet to be investigated in concert at the individual-subject level. We observe several unique tactile-biased regions of LFC that abut previously identified auditory- and visual-biased regions. We also observe several multiple demand regions that are recruited for all three modalities. The whole-brain intrinsic connectivity profiles of these LFC regions reveal that LFC ROIs possess unique fingerprints of network membership both across and within a sensory modality. Our results elucidate the complex topography of LFC and highlight the specific profiles of connectivity and task recruitment between and across sensory-selective LFC ROIs. Together, these results shed light on the complexity of the LFC sensory-selective regions that support higher-order cognition and reveal that much of an individual’s LFC can be mapped using sensory selectivity as a guiding principle.
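As a rough illustration of what an intrinsic connectivity "fingerprint" can look like computationally (this is not the authors' pipeline), one can correlate an LFC ROI's resting-state time series with seed time series for each candidate sensory network; all names and data below are hypothetical:

# Illustrative sketch only: one way to compute a connectivity "fingerprint" for
# an LFC ROI, i.e. its correlation with a set of network-defining seed regions.
# Time series stand in for preprocessed resting-state data; names and shapes
# are hypothetical, not the authors' pipeline.
import numpy as np

def fingerprint(roi_ts, seed_ts_dict):
    """Correlate one ROI time series with each seed network's time series."""
    return {name: np.corrcoef(roi_ts, ts)[0, 1] for name, ts in seed_ts_dict.items()}

rng = np.random.default_rng(0)
T = 400                                   # number of fMRI time points (hypothetical)
lfc_roi = rng.standard_normal(T)
seeds = {"visual": rng.standard_normal(T),
         "auditory": rng.standard_normal(T),
         "tactile": rng.standard_normal(T)}
print(fingerprint(lfc_roi, seeds))        # e.g. {'visual': 0.03, 'auditory': -0.01, ...}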
P1.86 Audio-tactile Crossmodal Correspondences: Listen! How does that feel?
Barnett, A. M., Walker, P. & Bremner, G. Lancaster University
Crossmodal correspondences can be defined as the appreciation of a relationship between attributes in two or more sensory channels. For instance, higher-pitched sounds are perceived as being visually pointier than their lower-pitched counterparts. Given the overwhelming evidence for the existence of audio-visual crossmodal correspondences, the aim of this research was to establish whether dimensions of auditory pitch (high/low) continue to align with dimensions of angularity (pointed/rounded) when the latter are experienced by touch. In this experiment, 32 children (aged 6-9 years) and 30 adults (19-62 years) paired tones that varied in auditory pitch with objects that varied in tactile angularity. Children and adults assigned higher- and lower-pitched sounds to pointed and rounded objects, respectively. Compared to children, adults demonstrated a stronger sensitivity to the crossmodal relationship between higher-pitched sounds and pointier objects, possibly indicating a learned component for this form of sensory perception. Findings suggest that crossmodal correspondences, at least for the relationship between auditory pitch and angularity, are less bound by sensory channels than originally considered.
P1.87 Fast and Slow Process Integration in Visuomotor Learning: Feedback Parameters and Aging
’t Hart, B.M., Ruttle, J., Chauhan, U., Straube, A., Eggert, T. & Henriques, D.Y.P. Centre for Vision Research, York University, Toronto, Canada
People are incredibly good at adapting their movements to altered visual feedback. In recent years it has become clear that visuo-motor adaptation does not depend on a single process. However, how multi-process motor learning depends on the task, or how its dynamics change with age, is still unclear. Here we investigate how well a two-rate model (Smith et al., 2006) can account for changes in task demands, and how age affects two-rate adaptation. Participants learn to reach with a cursor to a target while the cursor’s position is rotated around the start position by 45°. Each participant learns two rotation directions in counterbalanced order and in separate parts of the workspace. Crucially, each rotation is learned in a different task, so that we can compare model fits on data from two tasks within each participant. First, we tested whether two-rate models are affected by providing continuous or terminal feedback. Terminal feedback disambiguates error size, which would benefit error-based models of motor learning. However, adaptation with terminal feedback could be explained by a single-rate process, so continuous feedback seems better suited for studying multi-rate motor learning. Second, we tested how older and younger adults learn either an abruptly or a gradually introduced rotation. The small difference between the two age groups is not significant, but this may be due to our choice of paradigm. We use the two-rate model to predict an optimal paradigm for studying differences between age groups. Our results constrain how multi-rate motor learning should be studied, especially in older and patient populations, and show that within-participant paradigms are not only possible but useful.
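For readers unfamiliar with the model, the two-rate account combines a fast process (learns quickly, forgets quickly) and a slow process (learns slowly, retains well) whose sum is the observed adaptation. The sketch below uses illustrative parameter values, not values fitted to the present data:

# Minimal sketch of the two-rate model of motor adaptation (after Smith et al.,
# 2006). Parameter values below are illustrative, not fitted to the authors' data.
import numpy as np

def two_rate(perturbation, Af=0.92, Bf=0.20, As=0.996, Bs=0.05):
    fast = slow = 0.0
    output = []
    for p in perturbation:
        total = fast + slow
        error = p - total                 # visual error on this trial
        fast = Af * fast + Bf * error     # fast process: high learning, low retention
        slow = As * slow + Bs * error     # slow process: low learning, high retention
        output.append(total)
    return np.array(output)

# e.g. 120 trials of a 45-degree rotation followed by 40 washout trials
schedule = np.concatenate([np.full(120, 45.0), np.zeros(40)])
adaptation = two_rate(schedule)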
P1.88 “I know that Kiki is angular”: The metacognition underlying sound-shape correspondences
Chen, Y.-C., Huang, P.-C., Woods, A. & Spence, C. Mackay Medical College
We examined people’s ability to evaluate their confidence when making perceptual judgments concerning a classic example of sound symbolism, namely the Bouba/Kiki effect: People typically match the sound “Bouba” to more rounded patterns, whereas they match the sound “Kiki” to more angular patterns. We used radial frequency (RF) patterns, whose features can be systematically manipulated, as visual stimuli in order to induce a continuous change in the consensus of the sound-shape matchings (Chen, Huang, Woods, & Spence, 2016). Participants were asked to match each RF pattern to the nonsense word “Bouba” or “Kiki”, presented auditorily, and then to rate how confident they were in their matching judgment. For each visual pattern, individual participants were more confident about their own matching judgment when it happened to fall in line with the consensual response regarding whether the pattern was matched to Bouba or Kiki. Logit-regression analyses demonstrated that participants’ matching judgments and their confidence ratings were predictable by similar regression functions when using visual features as predictors. This implies that the consensus and confidence underlying the Bouba/Kiki effect are underpinned by a common process whereby the visual features of the patterns are extracted and then used to match the sound, following the rules of crossmodal correspondences. Combining matching and confidence measures therefore allows researchers to explore and quantify the strength of crossmodal associations in human knowledge.
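A hedged sketch of the kind of logit-regression analysis described above, with hypothetical feature names and data file; similar coefficient patterns across the two models would suggest that matching and confidence draw on the same visual-feature information:

# Sketch (not the authors' code) of logit regressions predicting the probability
# of a "Kiki" (vs "Bouba") match, and of a high-confidence rating, from visual
# features of each RF pattern. Column names and the data file are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("rf_matching.csv")   # one row per trial:
# columns: kiki_match (0/1), high_confidence (0/1), amplitude, frequency, spikiness

match_model = smf.logit("kiki_match ~ amplitude + frequency + spikiness", data=df).fit()
conf_model = smf.logit("high_confidence ~ amplitude + frequency + spikiness", data=df).fit()

print(match_model.params)
print(conf_model.params)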
Acknowledgments: YCC and CS were supported by the Arts and Humanities Research Council (AHRC), Rethinking the Senses grant (AH/L007053/1). YCC is supported by Ministry of Science and Technology in Taiwan (MOST 107-2410-H-715-001-MY2). PCH is supported by Ministry of Science and Technology in Taiwan (NSC 102-2420-H-006-010-MY2 and MOST 105-2420-H-006-001-MY2).
P1.89 Imagery clarifies confusion in the crossed-hands deficit
Lorentz, L., Unwalla, K. & Shore, D.I. Department of Psychology, Neuroscience & Behaviour. McMaster University
Localizing our sense of touch requires integrating internal, body-based cues with external, predominantly visual, cues. Placing the hands in a crossed posture puts the reference frames for these cues into conflict, producing a crossed-hands deficit (CHD; decreased accuracy in a tactile temporal order judgment (TOJ) task when the hands are crossed). Removing visual information by blindfolding reduces the CHD, presumably by degrading the external reference frame and therefore decreasing the conflict. However, blindfolding does not eliminate the deficit. This suggests that participants may still have access to visual information: perhaps they spontaneously imagine their crossed hands. Since visual imagery relies on activation of brain areas similar to those used in visual perception, this imagery may provide access to the visual (external) reference frame. To test this hypothesis, we asked blindfolded participants to imagine their hands uncrossed, even though they were crossed, while performing a tactile TOJ task. Information in the external reference frame should then be compatible with that in the internal reference frame, resulting in reduced conflict and, consequently, a decreased CHD. Participants completed three ordered blocks: uncrossed, crossed, and crossed with uncrossed-imagery instructions. Participants also completed several self-report measures of mental imagery ability. Instructions to imagine the hands uncrossed produced a numerical trend toward a smaller deficit. However, the more interesting finding was a positive correlation between imagery ability and the size of the crossed-hands deficit: those with strong mental imagery had a significantly larger CHD, providing an example of another individual difference affecting this measure. Future theorizing on the internal-external reference frame translation must consider, or control for, the influence of mental imagery.
P1.90 Implied tactile motion: Localizing dynamic stimulations on the skin
Merz, S., Meyerhoff, H.S., Spence, C. & Frings, C. University of Trier
We report two experiments designed to investigate how implied movement during tactile stimulation influences localization on the skin surface. Understanding how well tactile sensations can be localized on the skin surface is an important research question for those working on the sense of touch. Interestingly, however, the influence of implied motion on tactile localization has not been investigated before. Using two different experimental approaches, we observed an overall pattern of localization shifts analogous to those reported in the visual and auditory modalities: participants perceived the last location of a dynamic stimulation as lying further along its trajectory. In Experiment 1 (N = 38), participants judged whether the last vibration in a sequence of three vibrations was located closer to the wrist or the elbow. In Experiment 2 (N = 21), participants indicated the last location on a ruler attached to their forearm. We further pinpoint the effects of implied motion on tactile localization by investigating the independent influences of motion direction and perceptual uncertainty. Taken together, these findings underline the importance of dynamic information in localizing tactile stimuli on the skin. The results also indicate modality-specific differences in the localization of approaching versus receding stimuli, hinting at different functions of localization in different modalities.
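As a simple illustration (with hypothetical numbers, not the reported data), the forward shift can be expressed as the signed localization error along the direction of implied motion, so that positive values indicate displacement further along the trajectory:

# Sketch with hypothetical data: signed localization error relative to motion
# direction; positive values mean the judged endpoint lay further along the
# stimulus trajectory than the actual endpoint.
import numpy as np

actual_end = np.array([12.0, 14.5, 9.0])      # cm from the wrist (hypothetical)
judged_end = np.array([13.1, 15.2, 9.8])
direction = np.array([+1, +1, -1])            # +1 = toward elbow, -1 = toward wrist

forward_shift = (judged_end - actual_end) * direction
print(forward_shift.mean())                   # > 0: shift along the implied motion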
P1.91 Perception as Cognition: Beyond the Perception/Cognition Distinction
Hipolito, I. University of Wollongong
This paper argues that perceiving belongs within the cognitive fold, not outside of it. It reviews and rejects the rationale for drawing a sharp distinction between perception and cognition introduced by modular theories of mind. Modular theories assume that perception has an informationally encapsulated character that marks it out as different from other forms of cognition. These theories are rejected on the basis of the explanatory power of predictive processing, which dissolves the perception/cognition distinction. Predictive processing accounts have, however, been challenged for embracing a problematic, overly intellectualist vision of cognition across the board. It is shown that, even if we accept the full force of such critiques, there is a way to construe the predictive processing proposal such that it leaves space for a more nuanced account of perception – one that embraces the right degree of intellectualism and provides a way of retaining some important insights from the failed modular theories of perception. Finally, it is shown that reading predictive processing theories through this lens does not give us reason to think of any form of perceiving as non-cognitive – rather, it enables us to see all forms of perception as forms of cognition.
P1.92 Mental Rotation of Digitally-Rendered Haptic Representation
Tivadar, R.I., Rouillard, T., Chappaz, C., Knebel, J.F., Turoman, N., Anaflous, F., Roche, J. & Murray, M.M. University Hospital Center and University of Lausanne
Owing to neuroplasticity of visual cortices, several functions can be retrained after loss of vision using sensory substitution. Tactile information, for example, can support functions such as reading, mental rotation, and exploration of space. Extant technologies typically rely on real objects or pneumatically-driven renderings and thus provide a limited library of stimuli to users. New developments in digital haptic technologies now make it possible to actively simulate tactile sensations (www.hap2u.net). We studied such a new type of technology that renders haptic feedback by modulating the friction of a flat screen through ultrasonic vibration of varying amplitude to create the sensation of texture when the screen is actively explored. We reasoned that participants should be able to create mental representations of letters presented in normal and mirror-reversed haptic form without the use of any visual information, and to manipulate such representations in a mental rotation task. Normally sighted, blindfolded volunteers were trained on randomly assigned pairs of two letters (L and P or F and G) on a haptic tablet. They then felt all four letters in normal or mirror-reversed form at different rotations (0°, 90°, 180°, and 270°) and indicated their perception of the form by mouse button presses. We observed a prototypical effect of rotation angle on performance (i.e. greater deviation from 0° resulted in greater impairment), consistent with mental rotation of these haptically-rendered objects. We likewise observed generally slower and less accurate performance with mirror-reversed stimuli. Our findings extend existing research in multisensory integration by indicating that a new technology with simulated active haptic feedback can support the generation and spatial manipulation of mental representations of objects. This technology may thus offer an innovative solution to the mitigation of visual impairments and to the training of skills dependent on mental representations and their spatial manipulation.
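The prototypical rotation effect mentioned above is typically quantified by relating performance to the angular deviation from upright (so that 270° counts as a 90° deviation in the opposite direction); the sketch below uses hypothetical reaction times purely for illustration:

# Sketch (hypothetical data, not the reported results) of a standard
# mental-rotation analysis: response time as a function of angular deviation
# from the upright (0-degree) orientation.
import numpy as np

angles = np.array([0, 90, 180, 270])
deviation = np.minimum(angles, 360 - angles)      # 0, 90, 180, 90 degrees
rt_ms = np.array([1450, 1820, 2210, 1790])        # hypothetical mean RTs per angle

slope, intercept = np.polyfit(deviation, rt_ms, 1)
print(f"~{slope:.1f} ms per degree of rotation")  # positive slope = rotation cost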
P1.93 Audio-visual multiple object tracking: integration differences with age
Harrar, V., Roudaia, E. & Faubert, J. School of Optometry, Université de Montréal
The ability to track objects as they move is critical for successful interaction with objects in the world. The multiple object tracking (MOT) paradigm has demonstrated that, within limits, our visual attention capacity allows us to track multiple moving objects among distracters. Very little is known about dynamic auditory attention and the role of multisensory binding in attentional tracking. Here, we assessed whether dynamic sounds congruent with visual targets could facilitate tracking in a 3D-MOT task in 35 young (18-36 years) and 35 older adults (60-75 years). Participants tracked one or two target spheres among identical distractor spheres while they moved inside a 3D cube for 8 s at an individually adjusted speed. In the no-sound condition, targets were identified by a brief colour change, but were then indistinguishable from the distractors during the movement. In the audio-visual condition, each target was accompanied by a sound that moved congruently with the target. In the audio-visual control condition, the movement of the sound was incongruent with the target’s movement: the sound accompanied a distractor sphere instead. The amplitude of the sound varied with distance from the observer, the pitch of the sound varied with vertical elevation, and the inter-aural amplitude difference varied with azimuth. In young adults, sounds accompanying targets improved tracking, but only with a single target. Older adults showed no effect of sound overall, although older adults who were tracking targets at higher speeds were more likely to benefit from sound than the rest of the sample. Together, these results suggest that audio-visual binding in dynamic stimuli may be limited to a single target and may be less common in ageing.
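The auditory mapping can be pictured with a simple sketch like the one below (the formulas and parameter ranges are illustrative assumptions, not the actual psychoacoustic mapping used in the experiment): amplitude tracks distance, pitch tracks elevation, and the inter-aural amplitude difference tracks azimuth:

# Illustrative sketch only: mapping a position inside a unit cube to simple
# sound parameters, in the spirit of the mapping described in the abstract.
import numpy as np

def spatialise(x, y, z, base_f=440.0):
    """Map a position in the unit cube to pitch and left/right amplitudes."""
    distance = z                                   # 0 = near face, 1 = far face
    gain = 1.0 / (1.0 + 3.0 * distance)            # quieter when further away
    pitch_hz = base_f * 2 ** (y - 0.5)             # +/- half an octave with elevation
    azimuth = (x - 0.5) * 2                        # -1 = far left, +1 = far right
    left, right = gain * (1 - azimuth) / 2, gain * (1 + azimuth) / 2
    return pitch_hz, left, right

print(spatialise(0.8, 0.5, 0.2))                   # right of centre, mid height, near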
P1.94 Proprioceptive Distance Cues Restore Perfect Size Constancy in Grasping, but Not Perception, When Vision Is Limited
Chen, J.C., Sperandio, I.S. & Goodale, M.A.G. University of Western Ontario
Our brain integrates information from multiple modalities in the control of behavior. When information from one sensory source is compromised, information from another source can compensate for the loss. What is not clear is whether the nature of this multisensory integration, and the re-weighting of the different sources of sensory information, are the same across different control systems. Here, we investigated whether proprioceptive distance information (position sense of body parts) can compensate for the loss of the visual distance cues that support size constancy, in which the real-world size of an object is computed despite changes in viewing distance, in perception (mediated by the ventral visual stream) versus in grasping (mediated by the dorsal visual stream). We found that size constancy was perfect in both perception and grasping in a full-viewing condition (lights on, binocular viewing), and that size constancy in both tasks was dramatically disrupted in a restricted-viewing condition (lights off, monocular viewing of the same, but now luminescent, object through a 1-mm pinhole). Importantly, in the restricted-viewing condition, proprioceptive cues about viewing distance originating from the non-grasping limb (Experiment 1) or from the inclination of the torso and/or the elbow angle of the grasping limb (Experiment 2) compensated for the loss of visual distance cues, enabling a complete restoration of size constancy in grasping but only a modest improvement of size constancy in perception. This suggests that the weighting of different sources of sensory information varies as a function of the control system being used.
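As a rough illustration of how size constancy can be quantified (hypothetical numbers, not the reported data), one can regress estimated size or maximum grip aperture on physical object size across viewing distances; a slope near 1 that is unaffected by distance indicates constancy, whereas scaling with retinal image size instead indicates its failure:

# Sketch with hypothetical numbers: a size-constancy slope computed from
# maximum grip apertures (or perceptual size estimates) pooled across two
# viewing distances.
import numpy as np

object_size = np.array([4.0, 5.0, 6.0, 4.0, 5.0, 6.0])   # physical size, cm
distance = np.array([30, 30, 30, 60, 60, 60])             # viewing distance, cm
grip_mga = np.array([6.1, 7.0, 8.2, 6.0, 7.1, 8.0])       # hypothetical apertures, cm

slope, intercept = np.polyfit(object_size, grip_mga, 1)
print(f"size-constancy slope ~ {slope:.2f}")               # near 1 despite distance change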