Poster session 2

P2.1 Virtual Reality modulates Vestibular Brain Responses

Gallagher, M., Dowsett, R. & Ferrè, E.R.
Royal Holloway University of London


Virtual reality (VR) has become increasingly popular in the past decade. Key to the user’s VR experience are multimodal interactions involving all senses. However, sensory information for self-motion is often conflicting in VR: while vision signals that the user is moving in a certain direction with a certain acceleration (i.e. vection), the vestibular organs provide no cues for linear or angular acceleration. To solve this conflict, the brain might down-weight vestibular signals. Here we recorded participants’ physiological responses to actual vestibular events while they were exposed to VR-induced vection. We predicted that exposure to a few minutes of linear vection would modulate vestibular processing. Vestibular-evoked myogenic potentials (VEMPs) were recorded during exposure to either a randomly moving (no-vection condition) or expanding field of dots (vection condition). A significant enhancement in VEMP P1-N1 peak-to-peak amplitude was observed in the vection condition compared to the no-vection condition, for vestibular stimuli activating the right cortical vestibular projections. Our results suggest that exposure to VR modulates brain responses to vestibular stimuli. This supports the idea of a sensory re-weighting occurring in VR.

P2.2 Cybersickness in virtual reality partially explained by temporal binding window width

Sadiq, O. & Barnett-Cowan, M.
University of Waterloo


Virtual reality (VR) is a computer-generated simulation that largely manipulates visual surroundings that are updated when the observer moves in the real world. VR often causes sickness (known as cybersickness) in users, perhaps due to temporal and spatial discrepancies between multisensory cues from the virtual and real environments. While spatial discrepancies are largely resolved in current head-mounted display VR experiences, temporal discrepancies still persist and are often in the order of ~22 ms or more from head movement to visual updating. Here we sought to assess whether individual differences in the ability to bind multisensory cues in time within a “temporal binding window” (TBW) are related to cybersickness. We tested 11 participants in two different tasks. The first task involved two temporal order judgements: 1) an audio-visual (AV) task and 2) an audio-active head movement (AHMa) task, in which participants were presented with sound paired with a visual or head movement stimulus at different stimulus onset asynchronies. The second task involved exploration of two VR experiences for 30 minutes each, during which participants’ sickness was quantified every 2 minutes on the fast motion sickness scale and also at the end using the simulator sickness questionnaire (SSQ). Results show strong positive correlations between SSQ scores and TBW width from the AV task, indicating that those with wider AV TBWs are more susceptible to cybersickness. Correlations between SSQ scores and AHMa TBWs were non-significant. We conclude that while the CNS integrates information from all sensory modalities to navigate through VR experiences, our data suggest that individual differences in processing the relative timing of visual cues play a dominant role in predicting cybersickness. Our results will help develop novel assessment tools to predict cybersickness and, hopefully, new tools to reduce sickness tailored to individual differences in processing multisensory information.
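
For illustration, one common way to derive a TBW width from temporal order judgement data (not necessarily the exact procedure used here) is to fit a cumulative Gaussian to the proportion of “visual first” responses across SOAs and take the fitted spread as the window width; the SOAs, response proportions and width convention below are hypothetical.

import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def toj_curve(soa, pss, sigma):
    # Probability of a "visual first" response at a given SOA (ms).
    return norm.cdf(soa, loc=pss, scale=sigma)

soas = np.array([-200.0, -100.0, -50.0, 0.0, 50.0, 100.0, 200.0])      # hypothetical SOAs (ms)
p_visual_first = np.array([0.05, 0.20, 0.35, 0.55, 0.70, 0.85, 0.95])  # hypothetical proportions

(pss, sigma), _ = curve_fit(toj_curve, soas, p_visual_first, p0=[0.0, 80.0])
tbw_width = 2 * sigma   # one convention: the window spans +/- 1 SD around the PSS
print(f"PSS = {pss:.1f} ms, TBW width ~ {tbw_width:.1f} ms")
# Across participants, the reported relation would then be computed as, e.g.,
# scipy.stats.pearsonr(tbw_widths, ssq_scores).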

P2.3 Sensitivity to visual gain modulation in head-mounted displays depends on fixation

Moroz, M., Garzorz, I., Folmer, E. & MacNeilage, P.
University of Nevada, Reno


A primary cause of simulator sickness in head-mounted displays (HMDs) is the rendering of visual scene motion that does not match head motion. Agreement between visual scene motion and head motion can be quantified based on their ratio which we refer to as visual gain. We suggest that it is useful to measure perceptual sensitivity to visual gain modulation in HMDs (i.e. deviation from gain=1) because conditions that minimize this sensitivity may prove less likely to elicit simulator sickness. In prior research, we measured sensitivity to visual gain modulation during slow, passive, full-body yaw rotations and observed that sensitivity was reduced when subjects fixated a head-fixed target compared with when they fixated a scene-fixed target. In the current study, we investigated whether this pattern of results persists when 1) movements are faster, active head turns, and 2) visual stimuli are presented on an HMD rather than on a monitor. Subjects wore an Oculus Rift CV1 HMD and viewed a 3D scene of white points on a black background. On each trial, subjects moved their head from a central position to face a 15 deg eccentric target. During the head movement they fixated a point that was either head-fixed or scene-fixed, depending on condition. They then reported if the gain applied to the visual scene motion was too fast or too slow. Gain on subsequent trials was modulated according to a staircase procedure to find the gain change that was just noticeable. Sensitivity to gain modulation during active head movement was reduced during head-fixed fixation, similar to what we observed during passive whole-body rotation. We conclude that fixation of a head-fixed target is an effective way to reduce sensitivity to visual gain modulation in HMDs, and may also be an effective strategy to reduce susceptibility to simulator sickness.
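
For illustration, the staircase on visual gain could be implemented with a transformed rule such as the one below (the step rule, step size, starting gain and reversal count are assumptions, not the authors' procedure): gain is lowered toward 1 only after two consecutive “too fast” reports and raised after any “too slow” report, and the mean of the final reversals estimates the just-noticeable gain increment.

def gain_staircase(judge_too_fast, start_gain=1.4, step=0.04, n_reversals=10):
    """judge_too_fast(gain) -> True if the observer reports the scene motion as too fast."""
    gain, streak, last_dir, reversals = start_gain, 0, 0, []
    while len(reversals) < n_reversals:
        if judge_too_fast(gain):
            streak += 1
            if streak == 2:                     # two consecutive "too fast": step down
                direction, streak = -1, 0
            else:
                direction = 0                   # wait for a second "too fast" before stepping
        else:
            direction, streak = +1, 0           # any "too slow": step up
        if direction and last_dir and direction != last_dir:
            reversals.append(gain)              # record the gain at each direction reversal
        if direction:
            gain, last_dir = max(1.0, gain + direction * step), direction
    return sum(reversals[-8:]) / len(reversals[-8:]) - 1.0   # just-noticeable gain increment

# Example with a simulated observer who reliably notices gains above ~1.15:
# print(gain_staircase(lambda g: g > 1.15))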

Acknowledgments: Research was supported by NIGMS of NIH under grant number P20 GM103650.

P2.4 A common cause in the phenomenological and sensorimotor correlates of body ownership

Samad, M., Parise, C., Keller, S. & Di Luca, M.
Oculus Research


The feeling that our limbs belong to our body is at the core of bodily self-consciousness. Over the years, limb ownership has been assessed through several types of measurements, including questionnaires and sensorimotor tasks assessing the perceived location of the hand with a visual-proprioceptive conflict. Some studies report a correlation between the phenomenological and sensorimotor measures, whereas others report no relationship. This inconsistency prevents a unified operational definition of limb ownership. We sought to jointly record these two measurements to assess whether they originate from the same process. To that end, we used state-of-the-art hand tracking technology in virtual reality to induce ownership over a virtual limb while we parametrically manipulated spatial and temporal incongruences. Participants reported the subjective ownership and pointed to a target without seeing their hand to assess perceived hand location. Results show a surprisingly tight correlation between phenomenological and sensorimotor measures. We frame limb ownership as a multisensory integration problem, whereby the brain computes the probability that visual and proprioceptive signals have a common cause – and thus that the visually presented hand belongs to one’s body – and based on this determines the perceived hand location considering the reliability of the sensory signals. The outcome of the computation thus determines both the position of the hand and the strength of the ownership on which the subjective feeling should be based. We show that a Bayesian Causal Inference model closely captures human responses in both tasks, reconciling a fragmented literature and suggesting that body ownership can be well explained by a normative framework that has also been shown to account for a variety of other multisensory phenomena.
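
For illustration, the core computation of a standard Bayesian Causal Inference observer (as formalized by Körding et al., 2007) can be sketched as follows; the zero-centred Gaussian spatial prior, the model-averaging readout and all parameter values are assumptions made for this sketch, not necessarily the exact model fitted here.

import numpy as np

def bci_hand_estimate(x_v, x_p, sigma_v, sigma_p, sigma_prior, p_common):
    """Return P(common cause | signals) and the model-averaged hand-position estimate."""
    var_v, var_p, var_0 = sigma_v**2, sigma_p**2, sigma_prior**2
    # Likelihood of the visual (x_v) and proprioceptive (x_p) signals under one vs. two causes.
    denom = var_v * var_p + var_v * var_0 + var_p * var_0
    like_c1 = np.exp(-0.5 * ((x_v - x_p)**2 * var_0 + x_v**2 * var_p + x_p**2 * var_v) / denom) \
              / (2 * np.pi * np.sqrt(denom))
    like_c2 = np.exp(-0.5 * (x_v**2 / (var_v + var_0) + x_p**2 / (var_p + var_0))) \
              / (2 * np.pi * np.sqrt((var_v + var_0) * (var_p + var_0)))
    post_c1 = like_c1 * p_common / (like_c1 * p_common + like_c2 * (1 - p_common))
    # Hand-position estimates under each causal structure, then model averaging.
    s_c1 = (x_v / var_v + x_p / var_p) / (1 / var_v + 1 / var_p + 1 / var_0)
    s_c2 = (x_p / var_p) / (1 / var_p + 1 / var_0)
    return post_c1, post_c1 * s_c1 + (1 - post_c1) * s_c2

# Example: a 5 cm visuo-proprioceptive offset with noisier proprioception (values hypothetical).
p_c, s_hat = bci_hand_estimate(x_v=0.0, x_p=5.0, sigma_v=1.0, sigma_p=3.0,
                               sigma_prior=20.0, p_common=0.5)

The same common-cause probability drives both readouts, which is why the phenomenological (ownership) and sensorimotor (pointing) measures are predicted to covary.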

Acknowledgments: We acknowledge the support of a postdoc fellowship of the German Academic Exchange Service (DAAD)

P2.5 The balance of evidence: Estimating the influence of contributors to cybersickness

Weech, S., Varghese, J.P., Duncan, R.E. & Barnett-Cowan, M.
University of Waterloo, Department of Kinesiology


Despite the wide-ranging potential of virtual reality (VR), use of the technology is currently limited to enthusiasts. One major cause for this limited uptake is cybersickness, consisting of symptoms such as nausea and disorientation, which prevents adoption and reduces the likelihood that the user continues to use VR. The causes of related phenomena (e.g., sea-sickness, visually-induced motion sickness) have been subject to theorizing for several hundred years, but progress on a solution is slow. New approaches are required if society is to benefit from the promise of VR. Most studies of cybersickness focus on the impact of a single factor (e.g., balance control, vection), while the contributions of other factors are overlooked. However, accounting for the vast inter-individual variability in cybersickness will require the contributions of multiple predictors to be estimated. Here, we characterize how the complex relationship between balance control, vection susceptibility, and vestibular thresholds relates to cybersickness. We collected indices from a battery of sensory and behavioural tests, predicting that we would find an independent influence of each factor, and a complex multivariate interaction. In a 3 hour session, participants conducted tasks that measured balance control, responses to vection, and vestibular sensitivity to self-motion. While vestibular thresholds and most balance control measures demonstrated a low predictive value, the results showed that cybersickness is significantly predicted by a combination of vection responses and perturbed-stance balance control measures. In particular, high vection susceptibility appears to have a protective effect against cybersickness. These results complement a long line of research on vection and visually-induced motion sickness. The findings improve our understanding of an enduring obstacle to the adoption of VR, and will guide the development of therapeutic interventions. We address the prospect that genetic factors might play a role in cybersickness, and discuss the challenges involved in answering this question.

P2.6 Rubber hand/foot illusion in older adults

Teramoto, W. & Hide, M.
Kumamoto University


Studies have reported that several perceptual and cognitive functions are altered with age. However, little is known about the multisensory processing involved in bodily perception. The present study investigated older adults’ body representations and their link to sensorimotor functions using the rubber hand illusion (RHI) and rubber foot illusion (RFI). Twenty-four older adults participated in this study. Participants viewed a rubber hand or foot stimulated in synchrony or asynchrony with their own hidden hand or foot for 5 min. RHI and RFI were assessed with questionnaires, proprioceptive drift, and onset times. Sensorimotor functions were independently assessed with the Timed Up and Go (TUG) test, which is one of the clinical tools for examining functional mobility and fall risk in older adults. The participants were divided into two groups based on their TUG scores. Results showed that subjective ratings of body ownership and location did not differ between body parts or between TUG groups. However, proprioceptive drift was larger for the foot than the hand, especially for the group with relatively poor TUG performance. Additionally, the onset time was shorter for this group than for the group with relatively better TUG performance. These results suggest that the relative importance of visual over proprioceptive information in localizing older adults’ own body parts can change depending on the body part, and that it may be closely linked to decline in sensorimotor functions related to gait and balance.

Acknowledgments: This study was supported by JSPS KAKENHI Grant (S) (No. 16H06325) and (B) (No. 26285160).

P2.7 An audio game to help children and young people in developing cognitive associations between sounds and words

Setti, W., Cuturi, L. F., Cocchi, E. & Gori, M.
Italian Institute of Technology


Spatial memory is based on the capability to memorize and retrieve the locations of objects in the environment. In order to remember the position of an object, the individual can rely on its intrinsic characteristics such as its shape, colour or smell. Here we investigate how the auditory modality might affect spatial memorization, based on two different associations: between identical sounds and between a sound and its related concept. We focused on how these association processes develop during the lifespan. To this aim, we tested children (6-8 years old) with an audio memory test in the form of a game (i.e. the classical memory game), with two experimental conditions. In the first condition (semantic sounds), each participant was asked to find pairs of animal calls. In the second condition (semantic words), the task was to pair each animal call with a recorded voice speaking the animal’s name. The sounds were played through a device composed of 25 loudspeakers covered by tactile sensors. A cardboard grid was placed on its surface. Each time a participant found a pair, the experimenter covered the slots with cardboard pieces. Two parameters were evaluated: the score reached by the participants in the two conditions and the number of tries needed to complete the game. Our results show a tendency to perform worse in the semantic words condition, as participants reached a lower score and ended the game with a higher number of tries. Participants had more difficulty pairing the animal call with the name. We discuss this result in terms of the role of human growth and experience, since children learn word meanings gradually by adding more features to their lexical entries. The game could be used as a rehabilitative paradigm to structure the link between sounds and the concepts related to them.

P2.8 Audio-haptic cue integration across the lifespan

Scheller, M., Proulx, M. J. & Petrini, K.
University of Bath


Optimal integration of multisensory information has frequently been shown to benefit perception by speeding up responses and increasing perceptual precision and accuracy. These effects, however, are often demonstrated in young adults, and to a lesser extent in older adults or children, with a scarcity of studies examining how optimal integration changes across the lifespan. Furthermore, most studies have used different tasks and approaches to measure multisensory processing adaptation over large age ranges, making it difficult to compare between them. Here, by using the same adaptive size discrimination task and a cross-sectional design, we investigated how audio-haptic cue integration performance changes over the lifespan in children, younger adults, and older adults (age range spanning from 7 to 70 years). Participants were asked to give size discrimination judgements for physical objects of different sizes using either touch or hearing (unimodal) or both at the same time (bimodal). Discrimination thresholds were assessed for unimodal and bimodal stimulus presentation and compared with predictions from a maximum likelihood estimation (MLE) model. Results show that children do not make use of audio-haptic multisensory size information until around 13 years of age, while both younger and older adults benefit from integrating multisensory information, leading to increased precision. These results corroborate and extend the findings from previous studies that used different approaches, showing that children only gain from multisensory integration late in childhood, but that its benefits are preserved until later in life. They further suggest that integration of non-visual information, which becomes increasingly important with declining visual function later in life, allows individuals to effectively make use of redundant information, thereby offering an advantageous compensatory mechanism for declining sensory function.
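
For illustration, the MLE benchmark against which the bimodal thresholds are compared is derived from the two unimodal thresholds as below; the threshold values used here are hypothetical.

import numpy as np

def mle_predicted_threshold(sigma_a, sigma_h):
    """Predicted audio-haptic discrimination threshold from the unimodal thresholds."""
    return np.sqrt((sigma_a**2 * sigma_h**2) / (sigma_a**2 + sigma_h**2))

sigma_audio, sigma_haptic = 6.0, 4.0                       # hypothetical unimodal thresholds (mm)
print(mle_predicted_threshold(sigma_audio, sigma_haptic))  # ~3.33 mm, below either unimodal value

Observers whose measured bimodal threshold approaches this prediction are said to integrate near-optimally; children below about 13 years of age in this study did not show such a reduction.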

P2.9 Mechanisms of audiovisual integration in younger and healthy older adults

Jones, S.A., Beierholm, U. & Noppeney, U.
Centre for Computational Neuroscience and Cognitive Robotics, University of Birmingham


As we age, multisensory integration becomes increasingly critical for effective interaction with the environment. Some studies have shown greater multisensory benefits for older than younger adults. Others have suggested that older adults weight sensory signals suboptimally when compared to the predictions of maximum likelihood estimation.

Combining psychophysics, fMRI, multivariate pattern analysis (MVPA) and Bayesian Causal Inference (BCI) modelling, we investigated the computational operations and neural mechanisms mediating audiovisual integration of spatial signals in younger and older adults. In a spatial ventriloquist paradigm, we presented younger and older adults with synchronous auditory and visual signals at various levels of spatial conflict and reliability. Participants located the sounds in a selective attention task or judged whether auditory and visual signals came from a common source.

Our results revealed no significant effects of ageing on spatial location or common source judgments, as indicated by response choices or the parameters of a BCI model fitted individually to each participant’s responses. However, older participants’ response times for common source judgment and related selective attention tasks were significantly affected by the spatial congruence of the stimuli. This suggests that while older adults’ ability to integrate and respond to audiovisual stimuli is preserved (given sufficient time), they may be recruiting additional resources when arbitrating between integration and segregation of stimuli. We discuss these results in relation to a subsequent fMRI study, in which we presented younger and older participants with a similar ventriloquist task and applied MVPA to decode audiovisual spatial representations across the cortical hierarchy.

P2.10 Age-related brain changes in multisensory representation of hand movement

Landelle, C., Sein, J., Nazarian, B., Anton, J.L., Félician, O. & Kavounoudias, A.
LNSC


To perceive self-movements, the central nervous system relies on multiple sensory inputs including touch and muscle proprioception. We have previously identified a change in the relative contribution of these senses to the perception of hand movements, due to a greater alteration of muscle proprioception in the elderly [1]. The present study investigated whether these perceptual changes correlate with neural plastic changes using fMRI.

To this end, illusory sensations of right hand rotations were induced in young and old adults by stimulating the two modalities separately at two intensities (Low and High). The proprioceptive stimulation was induced by a mechanical vibration applied to their wrist muscle, while the tactile stimulation consisted of a textured disk rotating under their hand. Participants underwent a first experiment to estimate their ability to discriminate the velocity of hand movement illusions before being tested inside a 3-Tesla MRI scanner.

Results show that a common sensorimotor network is activated in both groups during movement illusions of tactile and proprioceptive origin: the contralateral primary sensorimotor cortices, bilateral inferior parietal lobule, supplementary motor area, insula and ipsilateral cerebellum. However, group comparisons revealed a broadening of this network in the ipsilateral hemisphere for older adults, which correlated with individuals’ declining ability to perceive illusion velocity of proprioceptive, but not tactile, origin.

The present findings show that age-related changes in kinesthetic perception from proprioceptive origin may be at least partly due to a central alteration of the interhemispheric balance between the primary sensorimotor regions in the elderly.

[1] Landelle et al., NeuroFrance 2017.

P2.11 Vestibular Sensitivity in Older Adults and Individuals with Age-Related Hearing Loss

Campos, J.L., Gnanasegaram, J., Harris, L.R., Cushing, S., Gordon, K., Haycock, B. & Gabriel, G.
Toronto Rehabilitation Institute, University Health Network


Several recent epidemiological studies have identified a reliable association between age-related hearing loss (ARHL) and an increased risk of falling in older adults. However, the exact nature of this relationship remains unclear. One possibility is that there are parallel declines in the sensitivities of both the auditory and vestibular systems in individuals with ARHL. Because the auditory and vestibular organs both use mechanical displacement of hair cells to sense sound and head movement/gravity respectively, are located in close proximity to each other, and send their inputs to the brain via the same vestibulocochlear nerve, it is possible that age-related changes are occurring due to a common cause within an individual. Here we tested this hypothesis by investigating whether older adults with ARHL also have an impaired ability to perceive passive self-motion in the dark compared to their age-matched peers without hearing loss. Younger adults (18-35 years), older adults with normal hearing (65+ years), and older adults with ARHL participated. A motion platform passively rotated (pitch) and translated (heave) participants. A two-interval forced choice task was implemented in which participants identified in which of the two intervals they were moved. Detection thresholds were determined using an adaptive psychophysical staircase procedure. Measures of hearing ability (pure-tone thresholds), vestibular function (video head impulse test, vestibular evoked myogenic potential), and balance (posturography) were also conducted. Results showed that older adults had a reduced sensitivity in detecting heave motion compared to younger adults. Older adults with ARHL demonstrated a reduced sensitivity in detecting heave and pitch movements compared to normal-hearing peers. Older adults with ARHL also demonstrated less stable standing balance. Overall, these results indicate that age-related hearing loss may be associated with poorer-than-normal vestibular sensitivity, which may underlie an increased risk of falls.

P2.12 Two Signals For Hand Localization – No Optimal Integration

't Hart, B.M. & Henriques, D.Y.P.
Centre for Vision Research, York University, Toronto, Canada


Knowing where your limbs are is important for evaluating movements – with internal models – and for planning new movements. When people are asked to localize their unseen hand after a movement, they have two non-visual signals available: predicted sensory consequences, based on an efference copy of the motor command, as well as felt hand position, or proprioception. Using a maximum likelihood estimate, people would integrate these two signals with weights depending on each signal’s reliability, and the combined estimate would have higher reliability than each signal individually, evident as a lower variance. While we can’t measure hand location estimates based on predicted sensory consequences in isolation, we can measure hand location estimates based on proprioception alone and on the two signals combined. In a previous paper (‘t Hart & Henriques, 2016) we found no evidence of maximum likelihood estimation, as the variance of the responses was approximately equal across conditions. However, there were very few trials and a relatively small group of participants, potentially obscuring effects. Here we have almost triple the data per participant and over 80 younger participants. In this larger dataset there is again no evidence of maximum likelihood estimation. In a group of older participants (N=20, age: 65+), the variance in responses is not larger, suggesting no significant effect of age on estimates of limb position; this group also shows no signs of optimal integration. So far, it seems that the brain does not integrate predicted sensory consequences with actual sensory information in the way it integrates signals from two sensory modalities. Keeping these estimates separate might be required in order to compare integrated sensory-only estimates of hand location with efference-predicted estimates and thereby update internal models during motor learning.

P2.13 Changes in the perception of the peripersonal space during pregnancy

Cardini, F., Fatemi-Ghomi, N., Gooch, V. & Aspell, J.E.
Anglia Ruskin University


The space immediately surrounding our body – i.e. ‘peripersonal space’ (PPS) – is important, as it is where we interact with stimuli in the external world. Recent studies have shown that the PPS boundaries are malleable. We therefore investigated whether the PPS changes during pregnancy, a critical stage of life in which extremely rapid changes in body size and shape occur. Given the rapidity of these changes, we expected the PPS to expand as pregnancy advances, reflecting an updated mental representation of one’s body in which, as the abdomen grows, external stimuli initially perceived as lying outside the PPS come to be perceived as closer, within the PPS.

To this aim, we tested 37 pregnant women and 19 non-pregnant women three times: at the 20th and 34th weeks of gestation and 8 weeks postpartum (with the same time intervals for the control group). To assess the PPS boundaries we used a well-established audio-tactile task (Canzoneri et al., 2012).

Comparing the boundaries between groups, we found that whereas no differences in PPS size were observed at the first and third testing periods, in the second period – i.e. at the advanced stage of pregnancy – the pregnant participants’ PPS was larger than the controls’.

To conclude, during pregnancy our brain adapts to the sudden change in body size, by expanding the representation of the space around us, possibly in order to protect the vulnerable abdomen from bumping against objects.

P2.14 Multisensory influences in locomotor development

Schmuckler, M. A.
University of Toronto Scarborough


Currently there exists compelling evidence that, in adults, successful locomotion through the world requires significant multisensory information and control. The most straightforward example of such multisensory control of locomotion arises in research investigating aspects of visually-guided locomotion, exploring people’s abilities to navigate around obstacles, over barriers, and through apertures. More recently researchers have been exploring the developmental trajectories of such multisensory control, examining young toddlers’ and children’s abilities to guide themselves around a cluttered environment, as well as the ability to make use of and integrate an array of multisensory inputs during locomotion. Work in my lab examines toddlers’ (14- to 24-month-old children), children’s (3 to 6 years) and adults’ walking skill as a function of widely varying multisensory information and visual-guidance requirements. One series of experiments explored the impact on toddlers’ walking skill of providing additional haptic information via object carriage when having to cross or not cross barriers in one’s path. A subsequent study extends these investigations by examining the impact on toddlers’ and adults’ gait of varying proprioceptive inputs while navigating along paths requiring varying forms of visual guidance. In this case, proprioceptive inputs were manipulated by either loading or not loading the limbs (applying 15% of body weight to the legs). Visual guidance was manipulated by requiring walking under conditions of free locomotion (wide pathway, no obstacles), constrained locomotion (narrow pathway, no obstacles), and guided locomotion (narrow pathway, obstacles). These studies converge in finding that, from early in life, toddlers show significant multisensory influences on their locomotion in the world, on both a gross behavioral level as well as on low-level kinematic parameters of gait.

P2.15 Maintained cross-modal control in aging: Unimodal and cross-modal interference follow different lifespan trajectories

Hirst, R.J., Allen, H.A. & Cragg, L.
University of Nottingham


In two experiments we assessed whether unimodal and cross-modal interference follow similar patterns of development and deterioration across the lifespan and whether unimodal and cross-modal interference occur at similar levels of processing. In experiment 1, children (n=42; 6-11 years), younger adults (n=31; 18-25 years) and older adults (n=32; 60-84 years) performed unimodal and cross-modal Stroop tasks. Colours and words could be congruent, incongruent but mapped to the same response (stimulus-incongruent) or incongruent and mapped to different responses (response-incongruent), thus separating interference occurring at early (sensory) and late (response) processing levels. Unimodal interference followed a U-shaped lifespan trajectory; however, older adults maintained the ability to ignore cross-modal distraction. Unimodal interference produced accuracy decrements due to response interference whilst cross-modal interference did not. In experiment 2, we compared the effects of auditory and visual cross-modal distractors in children (n=52; 6-11 years), young adults (n=30; 22-33 years) and older adults (n=30; 60-84 years). Neither type of cross-modal distraction followed the U-shaped trajectory seen in unimodal conditions. Older adults maintained the ability to ignore both visual and auditory cross-modal distractions. Unimodal and cross-modal interference appear to follow different lifespan trajectories and this may, in part, be due to differences in the processing level at which interference takes effect.

P2.16 Altered Audiovisual Processing and Perception following a Loss of Inhibition in the Multisensory Cortex of the Rat

Schormans, A.L. & Allman, B.L.
University of Western Ontario


Multisensory processing is a hallmark of the mammalian cortex; however, it remains uncertain which neuropharmacological factors contribute to the integration and accurate perception of multisensory stimuli. In the present study, we sought to investigate for the first time the role of local inhibition in cortical audiovisual processing and perception. In vivo extracellular electrophysiology recordings were completed across the layers of the rat lateral extrastriate visual cortex (V2L) — a region known to respond to both auditory and visual stimuli — before and after the local micro-infusion of the GABA-A antagonist, Gabazine. Ultimately, the effect of blocking inhibition on audiovisual processing was assessed at the level of local field potentials and spiking activity in response to auditory and visual stimuli presented alone or in combination. Subsequently, we used a separate group of rats trained on an audiovisual temporal order judgment (TOJ) task to determine the effect of local Gabazine micro-infusion in the V2L cortex on the ability to perceive the relative timing of audiovisual stimuli presented at various stimulus onset asynchronies. Using our established laminar analyses, we found that the loss of inhibition in V2L resulted in a 4x increase in sensory responsiveness irrespective of stimulus modality. Surprisingly, despite an increase in peak latency to both auditory and visual stimuli, only the response to visual stimulation showed a delayed onset. Consistent with these electrophysiological results, a loss of inhibition in the V2L cortex during the TOJ task altered the rats’ audiovisual perception, such that the visual stimulus needed to be presented well before the auditory stimulus in order for the stimuli to be perceived as having been presented simultaneously (vehicle: 3.9 ± 7.9 ms; Gabazine: 45.5 ± 11.9 ms). Taken together, our results suggest that a loss of cortical inhibition has significant implications for audiovisual processing and perception.

P2.17 Pre-attentive and Perceptual Audiovisual Temporal Processing in Rats Lacking the Autism Candidate Gene CNTNAP2

Scott, K., Schormans, A., Schmid, S. & Allman, B.
Western University


Pre-attentive and perceptual integration of multisensory information is necessary for appropriate interactions with our environment. In individuals with autism spectrum disorders (ASD), impairments in lower-level audiovisual processing can impact higher-order functions that rely on the ability to integrate complex auditory and visual signals across time. At present, the neural basis for these behavioural deficits remains unresolved. Preclinical animal models could help to reveal the molecular mechanisms of lower-level audiovisual processing impairments if they can first show high face validity for the ASD-related behavioral deficits. To that end, we assessed both pre-attentive and perceptual audiovisual processing in rats lacking the autism candidate gene CNTNAP2 (Cntnap2-/- rats) using translational behavioural paradigms. Pre-attentive audiovisual processing was examined utilizing the acoustic startle response (ASR) and its modulation by an auditory, visual, or audiovisual stimulus (i.e., prepulse) which occurred before the acoustic startle-eliciting stimulus. In addition, the rats’ ability to perceive the relative timing of audiovisual stimuli was assessed using a temporal order judgement (TOJ) task, consistent with studies on humans. As expected, the Cntnap2-/- rats exhibited a general impairment in prepulse attenuation of the ASR compared to age-matched wildtype controls. Interestingly, Cntnap2-/- rats, like wildtypes, showed a greater level of prepulse inhibition when the audiovisual prepulse was presented compared to the unimodal prepulse conditions; findings which indicate that the brainstem of the knockout rats was still able to integrate auditory and visual stimuli. At the perceptual level, Cntnap2-/- rats did not show a deficit in their ability to judge the relative order of the auditory and visual stimuli, which was consistent with previous studies in the autistic population performing the TOJ task with simple flash-beep stimuli. Taken together, these preliminary results highlight the validity of Cntnap2-/- rats as a preclinical model for studying audiovisual processing associated with ASD.

P2.18 Facilitation of speech-in-noise perception from visual analogue of the acoustic amplitude envelope

Yuan, Y. & Lotto, A. J.
University of Florida


It is well-known that an accompanying visual presentation of a talker can increase the accuracy of perception of speech in noise. Whereas some of the benefit of the visual stimulus may come from disambiguating particular phonetic segments (e.g., the lip closure during a /b/), it is also possible that the dynamic movement of the mouth provides an analogue of the acoustic amplitude envelope. Information about the amplitude envelope could allow the listener to better track the speech signal in the noise. To test this hypothesis, listeners were presented with spoken sentences in babble noise in either auditory-only or auditory-visual conditions. In this case, the visual stimulus was a sphere that increased and decreased in size synced to the amplitude of the speech signal (the envelope was extracted prior to being mixed with the babble noise). A significant improvement in accuracy in the auditory-visual condition was obtained even though there was no visual representation of phonetic information. These results provide evidence that visual representations of the amplitude envelope can be integrated online in speech perception. Not only do these data speak to the underlying benefits of visual presentations of talkers, but they also open up other possible techniques for improving speech-in-noise perception. In particular, the current technique can provide more veridical information about the amplitude envelope than can be inferred from visual displays of mouth/face kinematics.
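
For illustration, a standard way to obtain such an amplitude envelope and drive the sphere (the abstract does not specify the extraction method, so the Hilbert-plus-low-pass pipeline, the 10 Hz cutoff, the file name and the sphere sizes below are all assumptions) is:

import numpy as np
from scipy.io import wavfile
from scipy.signal import hilbert, butter, filtfilt

fs, speech = wavfile.read("clean_sentence.wav")       # hypothetical clean (pre-mixing) recording
envelope = np.abs(hilbert(speech.astype(float)))      # instantaneous amplitude of the speech signal
b, a = butter(2, 10 / (fs / 2), btype="low")          # ~10 Hz low-pass keeps syllabic modulations
envelope = filtfilt(b, a, envelope)

# Map the normalized envelope onto the sphere radius between assumed minimum and maximum sizes.
r_min, r_max = 0.5, 2.0                               # e.g. degrees of visual angle
radius = r_min + (envelope / envelope.max()) * (r_max - r_min)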

P2.19 Frontal lobe network contributions to auditory and visual cognition

Noyce, A.L., Tobyne, S.M., Michalka, S.W., Shinn-Cunningham, B.G. & Somers, D.C.
Boston University


Human caudolateral frontal cortex (LFC) is often characterized as domain-general or multiple-demand, due to its recruitment in a wide range of cognitive tasks (e.g. Duncan 2010, Fedorenko 2013). However, our laboratory has recently used fMRI to demonstrate that a direct contrast of visual and auditory attention (Michalka 2015) or visual and auditory working memory (Noyce 2017) robustly identifies a number of discrete sensory-biased regions in caudolateral frontal cortex. Three bilateral visual-selective areas, superior and inferior precentral sulcus (sPCS and iPCS) and mid inferior frontal sulcus (mIFS), are interleaved with three bilateral auditory-selective areas, the transverse gyrus bridging precentral sulcus (tgPCS), caudal inferior frontal sulcus/gyrus (cIFS/G), and frontal operculum (FO). These frontal structures also participate in sensory-specific functional networks with posterior visual (IPS/TOS) and auditory (STG/S) regions.

Acknowledgments: Supported in part by NIH R01-EY022229, NIH F32-EY026796, F31-MH101963, and by the Center of Excellence for Learning in Education Science and Technology, National Science Foundation Science of Learning Center Grant SMA-083597.

P2.20 Recalibration of vocal affect by a dynamic or static face

Baart, M., Keetels, M. & Vroomen, J.
Department of Cognitive Neuropsychology, Tilburg University, the Netherlands


Perception of vocal affect is influenced by the concurrent sight of an emotional face. We demonstrate that the sight of an emotional face also can induce recalibration of vocal affect. Participants were exposed to dynamic videos of a ‘happy’ or ‘fearful’ face in combination with a sentence with ambiguous prosody. After this exposure, ambiguous test sentences were rated as more ‘happy’ when the exposure phase contained ‘happy’ instead of ‘fearful’ faces. This auditory shift likely reflects recalibration that is induced by error minimization of the inter-sensory discrepancy. When the prosody of the exposure sentence was non-ambiguous and congruent with the face, aftereffects went in the opposite direction, reflecting adaptation. In a second experiment, we showed that recalibration could also be observed when the visual stimulus seen during exposure was static (i.e., a still frame of the video) rather than dynamic. Importantly, aftereffects in both the dynamic and static AV exposure conditions were larger than in a visual-only exposure condition where no cross-modal learning was possible. Our results demonstrate, for the first time, that perception of vocal affect is flexible and can be recalibrated by dynamic and static visual information.

P2.21 Optimal multisensory integration precedes optimal time estimation

Murai, Y. & Yotsumoto, Y.
University of California, Berkeley


Time is an amodal perceptual attribute that can be defined by any sensory modality. Since events in the outer world often generate multiple sensory signals, our brain estimates event time efficiently by collating multisensory information in a statistically optimal way (Hartcher-O’Brien et al., 2014). In addition to the informational redundancy arising from a single event, our brain utilizes the statistical structure of multiple events to perceive time optimally: the current percept is biased toward the mean of previous stimuli in a Bayesian manner (central tendency; Jazayeri & Shadlen, 2010). The present study investigates how these two strategies interact and shape optimal timing behavior. In the experiment, we measured the timing sensitivity and the central tendency by the time discrimination task and the time reproduction task, respectively, for unisensory (auditory or visual) and multisensory stimuli. In the discrimination task, participants judged whether the standard duration (640 ms) was longer or shorter than the comparison duration (450-900 ms), and the timing sensitivity was defined by the slope of the best-fitting psychometric function. In the reproduction task, participants reproduced the stimulus duration (450-900 ms), and the central tendency was defined by the slope of the linear regression of the reproduced durations onto the stimulus durations. In both tasks, the sensory uncertainty was systematically manipulated by adding noise. Psychophysical results demonstrated that the sensory uncertainty impairs the timing sensitivity and increases the central tendency bias, and that the multisensory timing improves both performance metrics compared to the unisensory timing. We computationally modeled the multisensory timing performance from experimentally obtained unisensory data, and revealed that the optimal multisensory integration precedes the Bayesian time estimation that causes the central tendency. Our findings suggest that our brain incorporates the multisensory information and prior knowledge in a statistically optimal manner to realize precise and accurate timing behavior.
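
For illustration, the two strategies can be combined in a simple observer model: reliability-weighted (MLE) integration first reduces the duration measurement noise, and Bayesian combination with a prior over durations then sets the regression slope, so lower noise yields a steeper slope and a weaker central tendency. The noise levels below are hypothetical, and the prior mean is assumed to sit at the middle of the 450-900 ms range.

def mle_sigma(sigma_a, sigma_v):
    # Audiovisual measurement noise after reliability-weighted integration.
    return (sigma_a**-2 + sigma_v**-2) ** -0.5

def bayes_reproduction(t_measured, sigma_m, prior_mean=675.0, prior_sd=130.0):
    w = prior_sd**2 / (prior_sd**2 + sigma_m**2)   # weight on the measurement (= regression slope)
    return w * t_measured + (1 - w) * prior_mean

print(bayes_reproduction(900.0, sigma_m=mle_sigma(80.0, 120.0)))  # audiovisual: closer to 900 ms
print(bayes_reproduction(900.0, sigma_m=120.0))                   # unisensory: pulled toward the mean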

P2.22 When does the brain integrate signals from vision and audition in line with the predictions of maximum likelihood estimation?

Meijer, D. & Noppeney, U.
University of Birmingham, UK


Multisensory perception is regarded as one of the most prominent examples for optimal human behaviour. Human observers are thought to integrate sensory signals weighted in proportion to their sensory reliabilities into the most reliable percept as predicted by maximum likelihood estimation (MLE). Yet, evidence to date has been inconsistent. Given the recent surge of interest in the limits and requirements of optimal human behaviour, we performed two experiments. The first study aimed to investigate motivation and training as potential factors that may influence whether or not observers integrate signals (MLE-)optimally in an audiovisual spatial localization task. Results indicated that half of the participants did not deviate significantly from MLE-based predictions, whereas the other half of the participants weighted the visual signals more than was predicted by reliability-weighted integration. Critically, we found no significant main effect for prior training or motivational reward: i.e. (sub-)optimal participants were spread equally across the groups. For the second study, we made some minor changes to the experimental design and found an increase in the number of participants that behaved (MLE-)optimally. Future studies are required to reinvestigate whether we may observe an effect of training or reward with the improved experimental design.
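
For illustration, the MLE prediction being tested is the reliability-weighted visual weight below (reliability = inverse variance); “visual overweighting” then means that the weight recovered from the bimodal localization responses exceeds this predicted value. The noise values are hypothetical.

def predicted_visual_weight(sigma_v, sigma_a):
    r_v, r_a = 1 / sigma_v**2, 1 / sigma_a**2     # reliabilities of the unimodal location estimates
    return r_v / (r_v + r_a)

w_v_pred = predicted_visual_weight(sigma_v=6.0, sigma_a=9.0)   # ~0.69 for these hypothetical noises
# The empirical weight can be read off how bimodal responses shift with the audiovisual conflict;
# an observed weight reliably above w_v_pred indicates visual overweighting.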

Acknowledgments: This study is funded by the European Research Council (ERC-2012-StG_20111109 multsens).

P2.23 Revealing audiovisual integration with the drift diffusion model

Murray, C.A., Tahden, M.A.S., Glicksohn, A., Larrea-Mancera, S., Seitz, A.R. & Shams, L.
University of California, Los Angeles


When an object produces sensory stimulation in more than one modality, the detection and discrimination of the object can improve because multiple sources of information can be exploited by the nervous system to perform the task. This advantage has manifested both as improvement in reaction time (RT) and in accuracy. If this improvement in performance exceeds the probability sum of the accuracy or RT in unisensory conditions, it cannot be attributed to a statistical advantage due to redundancy; previous studies have then interpreted it as evidence of sensory integration. The vast majority of multisensory integration studies have historically focused on analyzing evidence of integration based on either accuracy or RT (but see Drugowitsch et al., 2014; Rach, Diederich, & Colonius, 2011; Thelen, Talsma, & Murray, 2015). Here we present results that show that this approach may be inadequate in identifying and characterizing multisensory integration. The speed-accuracy trade-off may mask the advantage of multisensory integration (Drugowitsch et al., 2014), and therefore, to uncover the integration, a model that takes both accuracy and RT into account should be employed. For this purpose, we employed a Bayesian, hierarchical variant of a drift diffusion model (Wiecki, Sofer, & Frank, 2013). The model utilizes accuracy and RT data to fit decision parameters that differentiate between decision-making components, including stimulus discriminability and participant bias. In a task that required participants to detect auditory, visual, and audiovisual pulses, model results indicate that drift rate, which is correlated with stimulus discriminability, was higher for audiovisual pulses than for either auditory or visual pulses alone. We will also present model results from other tasks. The results indicate that participants may be integrating the stimuli, even though neither accuracy nor RT data alone provide evidence of integration.
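
For illustration, the cited hierarchical drift diffusion model (Wiecki, Sofer & Frank, 2013) is distributed as the HDDM Python package; a minimal sketch of letting drift rate vary with stimulus modality is shown below. The column names, condition labels and sampler settings are assumptions, not the authors' analysis script.

import hddm

# Expected columns: 'rt' in seconds, 'response' (0/1), and a 'modality' label in {'A', 'V', 'AV'}.
data = hddm.load_csv("pulse_detection.csv")

model = hddm.HDDM(data, depends_on={"v": "modality"})   # drift rate v varies by modality
model.find_starting_values()
model.sample(2000, burn=500)
model.print_stats()

# Posterior comparison of drift rates, e.g. P(v_AV > v_A) and P(v_AV > v_V).
v_A, v_V, v_AV = model.nodes_db.node[["v(A)", "v(V)", "v(AV)"]]
print((v_AV.trace() > v_A.trace()).mean(), (v_AV.trace() > v_V.trace()).mean())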

P2.24 How input modality and visual experience affect the neural encoding of categorical knowledge

Mattioni, S., Rezk,M., Cuculiza Mendoza, K., Battal, C., Bottini, R., van Ackeren, M., Oosterhof, N.N. & Collignon, O.
UcL- Université catholique de Louvain


Is conceptual knowledge implemented in the brain through representations that are abstracted from any sensory feature, or does it instead rely on the activation of representations that are bound to the input modality and based on sensory experience? To test these conflicting views, we used fMRI to characterize the brain responses to 8 conceptual categories presented acoustically in sighted and early blind individuals, and visually in a separate sighted group. We observed that the right posterior middle temporal gyrus (rpMTG) is the region that most reliably decodes categories and selectively correlates with conceptual models of our stimulus space, independently of input modality and visual experience. However, rpMTG maintained separate representational formats for audition and vision, suggesting distinct representational geometries across the senses. We also observed a robust enhancement in decoding auditory categories in the occipital cortex of blind individuals. Interestingly, this effect was lateralized to the right hemisphere. We then correlated the representational geometries extracted from the sighted group in vision with those from separate groups of blind and sighted individuals in audition, in regions that typically show categorical preference for faces (FFA), tools (LO) and scenes (PPA). We found a correlation between the visual and the auditory representational geometry of the stimuli in both hemispheres for the blind, but only in the left hemisphere for the sighted. Altogether, these results demonstrate how input modality and sensory experience impact the neural implementation of categorical representations and highlight hemispheric asymmetries in their expression.
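
For illustration, the group-to-group comparison of representational geometries amounts to correlating representational dissimilarity matrices (RDMs); the arrays below are random placeholders standing in for the 8-category activity patterns extracted from a given region.

import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
patterns_vision_sighted = rng.standard_normal((8, 200))   # placeholder: 8 categories x 200 voxels
patterns_audition_blind = rng.standard_normal((8, 200))   # placeholder patterns from a second group

# Each RDM is the vector of pairwise correlation distances between the 8 category patterns.
rdm_vision = pdist(patterns_vision_sighted, metric="correlation")
rdm_audition = pdist(patterns_audition_blind, metric="correlation")
rho, p = spearmanr(rdm_vision, rdm_audition)   # similarity of the two representational geometries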

P2.25 Short and long-term visual deprivation leads to adapted use of audiovisual information for face-voice recognition

Moro, S.S., Hoover, A.E.N. & Steeves, J.K.E.
Centre for Vision Research, York University


Person identification is essential for everyday social interactions. We quickly identify people from cues such as a person’s face or the sound of their voice. A change in sensory input, such as losing one’s vision, can alter how one uses sensory information. We asked how people with only one eye, who have had reduced visual input during postnatal maturation of the visual system, use faces and voices for person identity recognition. We used an old/new paradigm to investigate unimodal (visual or auditory) and bimodal (audiovisual) identity recognition of people (face, voice and face-voice) and a control category, objects (car, horn and car-horn). Participants learned the identity of 10 pairs of faces and voices (Experiment 1) and 10 cars and horns (Experiment 2) and were asked to identify the learned face/voice or car/horn among 20 distractors. People with one eye were more sensitive to voice identification compared to controls viewing binocularly or with an eye-patch. However, both people with one eye and eye-patched viewing controls use combined audiovisual information for person identification more equally than binocular viewing controls, who favour vision. People with one eye were no different from controls at object identification. The observed visual dominance is larger for person compared to object identification, indicating that faces (vision) play a larger role in person identification and that person identity processing is unique from that for objects. People with long-term visual deprivation from the loss of one eye may have adaptive strategies, such as placing less reliance on vision to achieve intact performance, particularly for face processing.

P2.26 An Electroencephalography Investigation of the Differential Effects of Visual versus Auditory Distractors on Crossmodal Temporal Acuity

Kwakye, L.D., Hirabayashi, K.K., Barnes-Scott, Z. & Papadakis, S.L.
Oberlin College


Our perception of the world hinges on our ability to accurately combine the many stimuli in our environment. This multisensory integration is highly dependent on the temporal relationship between unisensory events and our brain’s ability to discern small timing differences between stimuli (crossmodal temporal acuity). Our previous research investigated whether attention alters crossmodal temporal acuity using a crossmodal temporal order judgment (CTOJ) task in which participants were asked to report whether a flash or a beep, presented at different stimulus onset asynchronies, appeared first while concurrently completing either a visual or an auditory distractor task. We found that increasing the perceptual load of both distractor tasks led to sharp declines in participants’ crossmodal temporal acuity. The current study uses electroencephalography (EEG) to understand the neural mechanisms that lead to decreased crossmodal temporal acuity. Participants completed a CTOJ task in association with a visual distractor task, as described above, while EEG activity was recorded from 64 scalp electrodes. EEG activity was averaged relative to the onset of the flash, producing an event-related potential (ERP) waveform for each perceptual load level and stimulus onset asynchrony (SOA) combination. We found that increasing perceptual load most strongly influences the amplitude of the N1/P2 complex in response to the flash across parietal electrodes. This suggests that decreases in crossmodal temporal acuity with increasing visual load may be mediated by alterations to visual processing. Ongoing data collection investigates whether increasing auditory load will lead to alterations in auditory processing, thus suggesting a modality-specific mechanism for disruptions in crossmodal temporal acuity. Preliminary data analysis suggests different changes in the neural processing of audiovisual stimuli with increases in auditory load as compared to visual load. This line of research serves to illuminate the neural networks that underlie the interaction between attention and multisensory integration.

P2.27 Perceived simultaneity of audio-visual events depends on the relative stimulus intensity.

Horsfall, R.P., Wuerger, S.M. & Meyer, G.F.
University of Liverpool


Purpose. Simultaneity judgements (SJ) and temporal order judgements (TOJ) are often used to characterise audio-visual integration mechanisms. The resulting points of subjective simultaneity (PSS) have been shown to be uncorrelated, suggesting different underlying mechanisms. The multisensory correlation detector (MCD) model (Parise & Ernst, 2016) accounts for this lack of correlation by assuming identical early processing mechanisms but different task-specific weightings. The aim of our experiments was to explore the effect of the relative intensity of the unimodal signals on the PSS.

Methods. 34 observers (aged 20-69 years) performed both SJ and TOJ tasks with identical flash/bleep stimuli (100 ms) at varying stimulus onset asynchronies (-200 ms AV to +200 ms VA) and two flash intensities (1.1 cd/m2 or 366 cd/m2). In the TOJ task, participants judged whether the auditory or the visual stimulus came first; in the SJ task, whether the stimuli occurred simultaneously or separately. The PSS was defined as the SOA corresponding to the maximum of the SJ curve, and as the 50% point of the TOJ curve.

Results. (1) Consistent with previous reports, no correlation was found between the PSSs of the SJ and TOJ tasks. (2) Flash intensity had an effect: the PSS shifted from 20.9 ms (dim) to 7.6 ms (bright) in the SJ task, and from 17.2 ms (dim) to -7.4 ms (bright) in the TOJ task. (3) The effect of intensity was asymmetric around the PSS and more pronounced for visual-leading stimuli. (4) When an early non-linearity in the unimodal signals is added to the MCD model, the intensity-dependent shift in the PSS is predicted, but the observed asymmetry is not captured.

Conclusion. Our findings constrain possible models of intensity-dependent audio-visual integration mechanisms by ruling out low-level mechanisms as the sole explanation. We speculate that attentional enhancement of the visual signal by the auditory signal may play a role.
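
For illustration, the two PSS estimates can be obtained by fitting a scaled Gaussian to the SJ proportions (PSS at its peak) and a cumulative Gaussian to the TOJ proportions (PSS at the 50% point); the functional forms and the data points below are assumptions, not the fits used in this study.

import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

soas = np.array([-200.0, -100.0, -50.0, 0.0, 50.0, 100.0, 200.0])       # ms; negative = audio leads
p_simultaneous = np.array([0.15, 0.55, 0.80, 0.90, 0.85, 0.60, 0.20])   # hypothetical SJ data
p_visual_first = np.array([0.05, 0.15, 0.30, 0.45, 0.65, 0.80, 0.95])   # hypothetical TOJ data

sj_curve = lambda soa, amp, pss, sd: amp * np.exp(-0.5 * ((soa - pss) / sd) ** 2)
toj_curve = lambda soa, pss, sd: norm.cdf(soa, loc=pss, scale=sd)

(_, pss_sj, _), _ = curve_fit(sj_curve, soas, p_simultaneous, p0=[0.9, 0.0, 100.0])
(pss_toj, _), _ = curve_fit(toj_curve, soas, p_visual_first, p0=[0.0, 100.0])
print(f"PSS (SJ) = {pss_sj:.1f} ms, PSS (TOJ) = {pss_toj:.1f} ms")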

Acknowledgments: I would like to thank Kielan Yarrow for help with the model fitting.

P2.28 Hearing that voice and seeing that face: the role of non-affective characteristics in person identification.

Jicol, C., Little, A.(1), Petrini, K.(1,2) & Proulx, M.J.(1,2)
University of Bath

(1,2) Joint senior authors


Most research on how vocal and facial cues can be correlated has considered affective content as the most influential factor determining these associations. However, due to expertise in vocal and facial integration in day-to-day life, individuals may also associate certain non-affective characteristics of one cue with those of the second, especially in cases where only one modality is available (e.g. the voice when talking on the phone). The current study investigated whether non-affective face characteristics predicted associations with voices based on the same characteristics perceived in voices varying in emotion. First, in a rating study, 10 neutral voice utterances and 30 neutral faces were rated by 110 participants on masculinity, femininity, attractiveness, trustworthiness, age and body mass. In a matching study, these faces and voices were presented to another 49 participants who were asked to match each face with one voice. A second matching study used happy and sad versions of the same 10 voices and asked another 112 participants to perform the same matching task as in the neutral study. In both studies, the perceived characteristics (e.g. masculinity and attractiveness) correlated well between most matched faces and voices, for neutral, happy and sad voices and for both male and female judges. Results showed consistent patterns of associations, indicating that non-affective cues can cause specific faces to be matched with certain voices. Some pairings were dependent on the emotion of the voice and were consistent across genders, while others varied across genders but not emotion. Overall, males made most matching choices based on body mass and females on age. These findings have relevance for the development of sensory substitution devices aimed at blind and visually impaired individuals, as they point to the most relevant information used to form a certain facial representation given a certain voice.

P2.29 Development of cultural differences in emotion perception from faces and voices

Tanaka, A., Kawahara, M. & Sauter, D.
Department of Psychology, Tokyo Woman’s Christian University, Tokyo, Japan


Recent studies have demonstrated cultural differences in multisensory emotion perception from faces and voices. Tanaka et al. (2010) showed that Japanese adults are more attuned than Dutch adults to vocal information. The current study investigated how such a cultural difference develops in children and adults. In the experiment, Japanese and Dutch participants observed affective expressions of both Japanese and Dutch actors. A face and a voice, expressing either congruent or incongruent emotions, were presented simultaneously on each trial. Participants judged whether the person was happy or angry. Results in incongruent trials showed that the rate of vocal responses was higher in Japanese than in Dutch adults, especially when in-group speakers expressed a happy face with an angry voice. The rate of vocal responses was very low and did not differ significantly between Japanese and Dutch 5-6-year-olds. However, it increased with age in Japanese participants, while it remained the same in Dutch participants. These results suggest a developmental onset of cultural differences in multisensory emotion perception.

Hide abstract

 


P2.30 Sensory Rate Perception – Simply the sum of its parts?

Motala, A. & Whitaker, D.
Cardiff University

Show abstract

Previous experiments utilising sensory adaptation have provided evidence for a temporal ‘rhythm aftereffect’: adapting to a fast rate makes a moderate test rate feel slow, and adapting to a slow rate makes the same moderate rate feel fast. The present work aims to deconstruct the concept of rhythm and clarify how exactly the brain processes a regular sequence of sensory signals. We ask whether there is something special about ‘rhythm’, or whether it is simply represented internally by a series of ‘intervals’. Observers were exposed to a sensory rhythm at either auditory or visual temporal rates (a ‘slow’ rate of 1.5 Hz and a ‘fast’ rate of 6 Hz), and were tested with single empty intervals of varying durations. Results show that adapting to a given rate strongly influences the perceived duration of a single empty interval. This effect is robust across both interval reproduction and two-alternative forced-choice methods. These findings challenge our understanding of temporal rhythms and suggest that adaptive distortions in rhythm are, in fact, distortions of the repeatedly presented uniform intervals composing those rhythms.

Hide abstract

 


P2.31 Multi-modal representation of visual and auditory motion directions in hMT+/V5.

Rezk, M., Cattoir, S., Battal, C. & Collignon, O.
Catholic University of Louvain (UCL), Belgium

Show abstract

The human middle temporal area hMT+/V5 has long been known to code for the direction of visual motion trajectories. Although this region has traditionally been considered purely visual, recent studies suggest that hMT+/V5 may also selectively code for auditory motion. However, the nature of this cross-modal response in hMT+/V5 remains unresolved. In this study, we used functional magnetic resonance imaging to comprehensively investigate the representational format of visual and auditory motion directions in hMT+/V5. Using multivariate pattern analysis, we demonstrate that visual and auditory motion direction can be reliably decoded inside individually localized hMT+/V5. Moreover, we could predict the motion directions in one modality by training the classifier on patterns from the other modality. Such successful cross-modal decoding indicates the presence of shared motion information across modalities. Previous studies have used successful cross-modal decoding as a proxy for abstracted representation in a brain region. However, relying on a series of complementary multivariate analyses, we show unambiguously that the brain responses underlying auditory and visual motion direction in hMT+/V5 are highly dissimilar. For instance, auditory motion direction patterns were strongly anti-correlated with the visual motion patterns, and the two modalities could be highly discriminated based on their activity patterns. Moreover, representational similarity analyses demonstrated that modality-invariant models fitted our data poorly, while models assuming separate pattern geometries for audition and vision correlated strongly with the observed data. Our results demonstrate that hMT+/V5 is a multi-modal region that contains motion information from different modalities. However, while shared information exists across modalities, hMT+/V5 maintains highly separate response geometries for each modality. These results serve as a timely reminder that significant cross-modal decoding is not a proxy for abstracted representation in the brain.
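
As an illustration of the cross-modal decoding logic described above, the following sketch trains a classifier on motion-direction patterns from one modality and tests it on the other. It uses simulated response patterns and scikit-learn; the pattern sizes, noise levels and signal layout are assumptions for the demo, not the analysis pipeline used in the study.

```python
# Minimal sketch of cross-modal MVPA decoding: train a classifier on
# visual-motion patterns and test it on auditory-motion patterns.
# Data here are simulated; in practice the rows would be single-trial
# (or single-run) voxel patterns extracted from hMT+/V5.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels = 80, 120
directions = np.repeat([0, 1, 2, 3], n_trials // 4)      # 4 motion directions

def simulate_patterns(scale):
    # shared direction signal plus random noise (assumption for the demo)
    signal = np.zeros((n_trials, n_voxels))
    for d in range(4):
        signal[directions == d, d * 10:(d + 1) * 10] = scale
    return signal + rng.normal(size=(n_trials, n_voxels))

X_vis = simulate_patterns(scale=1.0)
X_aud = simulate_patterns(scale=0.5)

clf = LinearSVC(max_iter=10000)
within_vis = cross_val_score(clf, X_vis, directions, cv=5).mean()
cross = clf.fit(X_vis, directions).score(X_aud, directions)   # train vision, test audition

print(f"within-modality (vision) accuracy: {within_vis:.2f}")
print(f"cross-modal (vision -> audition) accuracy: {cross:.2f}")
```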

Acknowledgments: Fonds national de la recherche scientifique (FNRS)

Hide abstract

 


P2.32 Changes in resting-state connectivity in deaf individuals after learning a second (sign) language

Cardin, V., Kremneva, E., Komarova, A., Vinogradova, V., Davidenko, T., Turner, B. & Woll, B.
University of East Anglia

Show abstract

Studies of neural reorganisation as a consequence of early deafness show that regions in the superior temporal cortex, which are usually involved in speech processing in hearing individuals, are involved in sign language processing in deaf individuals. Posterior STC (pSTC) is also recruited for visual working memory processing in deaf individuals. This is accompanied by increased resting-state connectivity between pSTC and frontoparietal regions in deaf individuals.
Here we are interested in understanding whether early deafness results in a reorganisation of brain regions and networks involved in language learning. Specifically, we investigate whether there is an increase in resting-state connectivity between pSTC and frontoparietal regions due to second language learning. Language can be produced and perceived in several modalities: spoken, signed, written. However, most language research, including that of language learning, focuses on the study of spoken languages. With this approach, it is not possible to disentangle which processes are related to the sensorimotor processing of the spoken and written language signal, and which, if any, reflect abstract linguistic processing. Here we studied changes in resting-state connectivity in groups of deaf individuals as they learned, in a naturalistic setting, a second sign language (L2). Participants were deaf signers of Russian Sign Language who enrolled in a course equivalent to Level 1 British Sign Language. Resting-state fMRI scans were collected before, during and after the BSL course.
Preliminary results showed no difference in connectivity between pSTC and frontoparietal regions after language learning. Instead, there was an increase in connectivity between anterior temporal and frontal regions in deaf individuals after learning a sign language as an L2. These findings are in agreement with previous studies of spoken L2 learning, suggesting that these regions are involved in language learning independently of language modality.

Acknowledgments: FUNDING: Russian Science Foundation Grant No. 16-18-00070

Hide abstract

 


P2.33 Sight restoration in congenitally blind individuals: multisensory perception for action execution

Senna, I., Pfister, S. & Ernst, M.
Ulm University, Germany

Show abstract

In our daily life we easily integrate vision with other sensory signals (e.g., proprioception) to plan and guide actions. When a systematic error is introduced, for instance by means of prism goggles shifting the apparent location of a target, humans can still reach for the target by easily recalibrating the visuo-motor system. In the present study we investigated whether newly-sighted individuals (i.e., born with bilateral cataract, and surgically treated after years of visual deprivation) are able to recalibrate the sensory-motor systems and thus minimise the systematic error.

Compared to typically developing individuals, who quickly adapted to the visual shift, newly-sighted individuals were less able to recalibrate the sensorimotor system: they partially reduced the error, but they did not fully adapt to the visual shift. This finding cannot be explained simply by the fact that newly-sighted individuals have lower visual acuity than controls. First, the ability to adapt to the visual shift correlated not only with visual acuity, but also with time since surgery. Moreover, blurring vision in sighted controls did not impair their ability to adapt to the visual shift: although this procedure slowed the rate of recalibration, controls still adapted more, and faster, than newly-sighted individuals. Finally, some cataract patients could be tested right before and right after surgery, and their performance did not improve immediately after surgery, despite a significant improvement in their visual acuity.

These results show that newly-sighted individuals can make only partial use of visual feedback to correct motor performance. This ability is not present immediately after surgery, but seems to require time, and thus some sensorimotor experience, to develop.

Hide abstract

 


P2.34 Increased recruitment of rSTS for tactile motion processing in early deaf individuals

Scurry, A.N., Huber, E. & Jiang, F.
University of Nevada, Reno

Show abstract

After early sensory deprivation, the remaining intact modalities often drive cross-modal reorganization of the deprived cortex. For instance, early deaf (ED) individuals show recruitment of auditory cortex for visual motion processing compared to normal-hearing (NH) controls. Previous studies of compensatory plasticity in early deaf individuals have tended to focus on visual spatial processing, with less attention given to the tactile modality. Therefore, in the current study we examined the effects of early auditory deprivation on tactile motion processing. An experimenter delivered 4 directions of tactile motion on the right index finger of 5 ED and 5 NH controls, who were asked to detect infrequent trial blocks (less than 10%) that contained a repeated motion direction. Using a modified population receptive field (pRF) analysis that assumed a one-dimensional Gaussian sensitivity profile over tactile motion direction, we characterized tactile motion responses in anatomically defined primary and secondary somatosensory cortices (SI and SII, respectively), primary auditory cortex (PAC), and superior temporal sulcus (STS) defined functionally from visual motion responses. As expected, similar direction-selective responses were found within SI and SII in the two groups. We also found significant but minimal responses to tactile motion within PAC for all subjects. While ED individuals showed significantly larger recruitment of right STS (rSTS) during tactile motion stimulation, there was no evidence of directional tuning in this region as revealed by the pRF analysis. Greater recruitment of rSTS by tactile motion is in line with findings from animal studies investigating cortical reorganization in multisensory areas. The presence of tactile motion responses with no clear directional tuning in rSTS suggests a more distributed population of neurons dedicated to processing tactile spatial information as a consequence of early auditory deprivation.
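
The following is a minimal sketch of the kind of one-dimensional Gaussian direction-tuning fit that a pRF-style analysis implies, applied to simulated per-direction responses with scipy; the tuning function, parameter bounds and starting values are assumptions, not the modified pRF method used in the study.

```python
# Minimal sketch of a 1D Gaussian "tuning" fit over tactile motion direction.
# Responses are simulated; real data would be per-direction voxel responses.
import numpy as np
from scipy.optimize import curve_fit

directions = np.array([0.0, 90.0, 180.0, 270.0])     # 4 motion directions (deg)

def gaussian_tuning(theta, amp, pref, sigma):
    # circular distance to the preferred direction, then a Gaussian profile
    d = np.angle(np.exp(1j * np.deg2rad(theta - pref)))
    return amp * np.exp(-0.5 * (np.rad2deg(d) / sigma) ** 2)

rng = np.random.default_rng(1)
true_params = (1.0, 90.0, 60.0)                      # amplitude, preferred direction, width
responses = gaussian_tuning(directions, *true_params) + rng.normal(0, 0.05, directions.size)

p0 = (1.0, 45.0, 45.0)                               # starting guess (assumption)
fit, _ = curve_fit(gaussian_tuning, directions, responses, p0=p0,
                   bounds=([0, 0, 5], [10, 360, 180]))
print("fitted amplitude, preferred direction, sigma:", np.round(fit, 2))
```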

Acknowledgments: This work has been supported by EY023268 to Fang Jiang

Hide abstract

 


P2.35 Elucidating responses to non-visual motion cues in hMT+ of early blind and sighted adults.

Barrett, M.M.(1) & Rauschecker, J.P.(1,2)
1 Laboratory for Integrative Neuroscience and Cognition; Department of Neuroscience; Georgetown University Medical Center
2 Institute for Advanced Study, Technische Universität München

Show abstract

In sighted individuals, BOLD activity can be observed in a middle temporal cortical region, known as hMT+, in response to moving visual stimuli. There is evidence to suggest that this region may be a multimodal area, with fMRI studies showing BOLD responses in hMT+ to tactile and auditory motion cues in addition to visual responses (Hagen et al., 2002; Poirier et al., 2005). Furthermore, research has shown that hMT+ responds to auditory and tactile motion in early blind individuals (Jiang, Stecker, & Fine, 2014; Matteau, Kupers, Ricciardi, Pietrini, & Ptito, 2010). However, a more recent study showed that the areas within hMT+ of sighted individuals that respond to moving visual stimuli are not recruited for processing tactile motion information when BOLD activation is mapped in individual subjects (Jiang, Beauchamp, & Fine, 2015). We wanted to assess whether this would also be the case for auditory and tactile motion cues in early blind individuals, since hMT+ responds to both tactile and auditory motion cues in this cohort. Our study also aimed to elucidate whether modulation of BOLD activity in hMT+ would be observed when auditory and tactile motion cues are presented together. In addition, we compared how this region responds to non-visual motion input in sighted subjects. Conjunction analyses revealed that activation in response to tactile and auditory motion cues overlaps within hMT+ in early blind individuals at the single-subject level. In the sighted group, BOLD activity was not observed in hMT+ in response to auditory motion stimuli. Modulation of BOLD activity was found in hMT+ when audio-tactile motion cues were presented to early blind adults. The results of this study provide evidence of latent multisensory inputs to visual cortex and also inform the design of sensory substitution devices by demonstrating how multisensory cues can affect the effectiveness of these devices.

Acknowledgments: This study was supported by NIH Grant R01 EY018923 to J.P. Rauschecker.

Hide abstract

 


P2.36 Peripheral, task-irrelevant sounds activate contralateral visual cortex even in blind individuals.

Amadeo, M.B., Störmer, V.S., Campus, C. & Gori, M.
Unit for Visually Impaired People, Istituto Italiano di Tecnologia, Genova, Italy

Show abstract

Recent findings suggest that peripheral, task-irrelevant sounds elicit activity in contralateral visual cortex, as revealed by a sustained positive deflection in the event-related potential (ERP) over the occipital scalp contralateral to the sound’s location (McDonald et al., 2013). This Auditory-evoked Contralateral Occipital Positivity (ACOP) appears between 200–450ms after sound onset, and is present even when the task is entirely auditory and no visual stimuli are presented at all. Here, we investigate whether this cross-modal activation of contralateral visual cortex is mediated by visual experience.

To this end, ERPs were recorded in 12 early blind subjects during a unimodal auditory task. Participants sat 180 cm away from a set of speakers and listened to a stream of sounds presented in random order and at unpredictable times (variable inter-stimulus interval). The auditory stream included task-irrelevant noise bursts delivered from the left or right side (i.e. ±25° eccentricity) and 1000 Hz target tones delivered from the center (i.e. 0° eccentricity; similar to McDonald et al., 2013). Participants were instructed to press a button every time they heard a central target tone, while ignoring the peripheral noise bursts. The EEG analysis focused on the ERPs triggered by the task-irrelevant noise bursts. These noise bursts elicited an ACOP, indicating that peripheral sounds can enhance neural activity in visual cortex in a spatially specific manner even in visually deprived individuals.
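
For illustration, a minimal numpy sketch of the contralateral-minus-ipsilateral occipital contrast that defines the ACOP is given below; the channel names, sampling rate and simulated epochs are assumptions, and only the 200–450 ms window follows the abstract.

```python
# Minimal sketch of the contralateral-minus-ipsilateral occipital contrast
# that defines the ACOP. Epochs are simulated; in a real analysis they would
# be EEG epochs time-locked to the left/right task-irrelevant noise bursts.
import numpy as np

sfreq = 500                                   # sampling rate in Hz (assumption)
times = np.arange(-0.1, 0.6, 1 / sfreq)       # epoch from -100 to 600 ms
n_epochs = 100
rng = np.random.default_rng(2)

def simulate(side):
    # two lateral occipital channels (PO7 = left, PO8 = right), epochs x time
    data = {"PO7": rng.normal(0, 1, (n_epochs, times.size)),
            "PO8": rng.normal(0, 1, (n_epochs, times.size))}
    # inject a positivity 200-450 ms after sound onset over the contralateral channel
    contra = "PO8" if side == "left" else "PO7"
    data[contra][:, (times >= 0.2) & (times <= 0.45)] += 0.8
    return data

left_sounds, right_sounds = simulate("left"), simulate("right")

# average contralateral minus ipsilateral waveform, collapsed over sound side
contra = np.mean(np.vstack([left_sounds["PO8"], right_sounds["PO7"]]), axis=0)
ipsi = np.mean(np.vstack([left_sounds["PO7"], right_sounds["PO8"]]), axis=0)
acop = contra - ipsi

window = (times >= 0.2) & (times <= 0.45)
print(f"mean ACOP amplitude 200-450 ms: {acop[window].mean():.2f} (a.u.)")
```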

In conclusion, the cross-modal activation of contralateral visual cortex triggered by peripheral sounds does not require any visual input to develop. Our results are in line with a growing body of literature showing a strong and reliable response to sounds in the primary visual cortex of blind individuals (Amedi et al., 2007; Lane et al., 2015; Bedny et al., 2011; Röder et al., 2002; Kujala et al., 1995; Föcker et al., 2012).

Hide abstract

 


P2.37 Audio-Spatial Representation is Altered in Patients with Central Scotoma

Ahmad, H., Setti, W., Capris, E., Facchini, V. & Gori, M.
Italian Institute of Technology (IIT), Genova, Italy

Show abstract

Sound localization is a skill that blind individuals develop in order to perceive the space around them (Collignon et al. 2005; Renier, Collignon et al. 2005; Striem-Amit and Amedi 2014). This plastic change develops in auditory cortex after the loss of vision. Macular Degeneration (MD) is a retinal disorder that creates “blind spots” on the retina, cutting off visual input to the corresponding visual representations. This study investigated whether the absence of vision due to a central scotoma (blind spot) induces a change in audio-spatial perception in patients with central MD. We investigated sound localization in 16 MD patients (age range 14-87 years; mean age = 66.875 years) suffering from central vision loss and a control group of 16 age-matched (p > 0.05) sighted controls, using an array of 25 haptic blocks with loudspeakers at their centres, arranged as a matrix. Subjects were asked to fixate the central block of the matrix while localizing sounds coming from the other speakers (central or peripheral) of the matrix. We found that the localization responses of patients with central vision loss were more frequently attracted towards the central speakers compared to the peripheral ones (p < 0.05), whereas the sighted group tended to perceive sounds from all over the array (p > 0.05), i.e. both central and peripheral speakers. These results support our hypothesis that sound is attracted towards the blind zones. We suggest that this attraction of sounds towards the scotoma could result from plasticity that recruits auditory inputs in visual cortex after vision loss. We are performing EEG experiments to further test the idea that the recruitment of visual cortex by audition after vision loss is a fast plasticity mechanism that starts immediately after vision loss to support multisensory integration.

Acknowledgments: Claudio Campus and Giulio Sandini

Hide abstract

 


P2.38 Influence of visual experience on auditory spatial representation around the body

Aggius-Vella, E., Campus, C. & Gori, M.
Istituto Italiano di Tecnologia (IIT)

Show abstract

There is still debate about the role of vision in calibrating audition during spatial tasks. Some studies have shown that blind people perform better than sighted people in localizing sounds (Collignon et al., 2006), while others found that the lack of vision leads to a spatial deficit in the auditory spatial bisection task (Gori et al., 2014). Our previous research found that sighted people perform the auditory spatial bisection task better in frontal space than in rear space. We interpreted these results as evidence of the important role of vision in calibrating hearing, producing a different auditory representation of the rear space, where vision is not available. To confirm that the difference in performance between the two spaces was due to vision, in the current study we investigated how the lack of vision affects audio-spatial metric representations in frontal and rear space by testing blind participants. Sighted and early blind participants performed a spatial bisection task and two control tasks: a minimum audible angle task and a temporal bisection task. As expected, both groups showed no differences between frontal and rear space in the minimum audible angle and temporal bisection tasks. In contrast, in the spatial bisection task, sighted and blind individuals behaved differently in the two spaces. While sighted subjects performed better in frontal space than in rear space, no difference between spaces was found in the early blind group. Our results are in agreement with the idea that vision is important for developing an auditory spatial metric representation. Moreover, we show for the first time that the role of vision is specific to spaces where vision is naturally available, providing evidence that rear and frontal space are coded differently by the brain on the basis of the available sensory input.

Hide abstract

 


P2.39 A comparison of neural responses to visual stimulation in congenitally deaf, neonatally deafened and hearing cats measured with MRI

Levine, A.T., Butler, B.A. & Lomber, S.G.
The University of Western Ontario

Show abstract

Normal brain development depends on early sensory experience. In the case of hearing loss, unutilised brain regions undergo plasticity and come to process signals from the intact sensory inputs. Experiments with congenitally deaf human or animal subjects show evidence of improved abilities in peripheral visual motion detection and spatial localization. Such heightened behavioral performance in the visual domain is facilitated by compensatory plasticity occurring in deprived brain regions. Evidence of crossmodal plasticity is widely documented throughout the literature; however, much less is known about plasticity occurring unimodally (i.e. whether visual cortical representations of visually-evoked activity are altered in the deaf).

To address this, non-invasive high field functional magnetic resonance imaging (fMRI) was used in lightly anesthetized congenitally deaf, neonatally deafened, and hearing cats. BOLD percent signal change was measured during the presentation of a whole field visual circular flashing checkerboard, extending 16 degrees into the peripheral field. Fixation was confirmed by visually assessing the gaze of the cat through the scanner bore. Across the three groups, patterns of activation within thalamic as well as primary visual areas are compared to describe differential effects of early sensory loss across different levels of the visual processing hierarchy. Moreover, the degree to which auditory cortical regions show visually-evoked BOLD activity is compared. These whole-brain functional data are the first of their kind, and will further our understanding of both crossmodal and unimodal plasticity in the deaf brain.

Hide abstract

 


P2.40 Consonant-Order Reversals in the McGurk Combination Illusion

Gil-Carvajal, J. C., Dau, T. & Andersen, T.
Cognitive Systems, Department of Applied Mathematics and Computer Science, Technical University of Denmark

Show abstract

Humans can integrate auditory and visual information when perceiving speech. This is evident in the McGurk effect, in which a presentation of e.g. auditory /aba/ and visual /aga/ leads to the audiovisually fused percept /ada/. With the pairing of auditory /aga/ and visual /aba/, however, the illusion takes the form of a combination percept of either /abga/ or /agba/. Here, we investigated how audiovisual timing influences the perceived order of the consonants in the McGurk combination. Stimuli were recorded with the consonants /g/ and /b/ using vowel-consonant-vowel (VCV) utterances with two syllabic contexts. First, the “internal timing” was studied by articulating the consonant to either emphasize the closing phase (VC-V) or the opening phase (V-CV). This produced cross-modally asynchronous consonants while maintaining synchrony of the vowels. Auditory /ag_a/ dubbed onto visual /a_ba/ was mostly heard as /agba/ whereas auditory /a_ga/ dubbed onto visual /ab_a/ was mostly heard as /abga/. Hence, syllabic context largely determined the perceived consonant order. Second, the effect of audiovisual stimulus onset asynchrony (SOA) was examined at five different SOAs, ranging from 200 ms auditory lead to 200 ms visual lead. The results showed no effect on the perceived consonant order but audiovisual SOAs influenced the strength of the illusion. Furthermore, we found that the window of integration is highly asymmetric for combination illusions and that the direction of the asymmetry depends on the perceived consonant order. We interpret the results as indicative of feature based audiovisual integration where formant transitions and aspirations are integrated separately.

Acknowledgments: This work was supported by the Oticon Centre of Excellence for Hearing and Speech Sciences (CHeSS) and by the Technical University of Denmark.

Hide abstract

 


P2.41 A probabilistic model for modulated speech encoding in the McGurk effect

Karthikeyan, G., Plass, J., Ahn, E., Rakochi, A., Stacey, W. & Brang, D.
University of Michigan

Show abstract

Viewing a speaker’s lip articulations during speech can affect what a listener perceives. In the McGurk effect, a mismatch between visual information extracted from lip articulations (e.g., the viseme /ga/) and auditory information extracted from the speech signals (e.g., the phoneme /ba/) results in the listener perceiving a “fusion” percept not present in either the auditory or visual signals (e.g., the phoneme /da/). Previous research has demonstrated that this effect involves a network of areas including posterior fusiform face areas, visual motion area MT, and temporal auditory areas, with lipreading information present even in early auditory areas. Nevertheless, the role of these separate regions in generating the McGurk effect remains poorly understood. Here, we utilize deep learning algorithms to examine whether probabilistic models of electrocorticographic (ECoG) activity in these regions can account for typical perceptual experiences observed in the McGurk effect. Patients were presented auditorily, visually, or audiovisually (congruent or incongruent) with four different phonemes. We used Convolutional Neural Networks (CNNs) to determine the probability with which the identity of each phoneme could be decoded based on auditory-alone trials, visemes, and congruent and incongruent audio-visual trials. The individual decoding probabilities of the auditory alone trials and visemes were then used to calculate a probability for comparison with the decoding probability of congruent and incongruent audio-visual trials across different brain regions and frequency bands. This analysis results in a computational model for the McGurk Effect that incorporates the unique contributions of multiple neural regions (MT, pSTS, STG) and neurophysiological mechanisms (low-frequency oscillations and local synaptic or spiking activity) in order to explain how multisensory speech modulates neural activity to produce modulated speech perception.
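
As an illustration of how unisensory decoding probabilities might be combined into a prediction for audiovisual trials, the sketch below uses a simple normalized-product rule; the abstract does not specify the combination rule actually used, and the phoneme set and probability values shown here are hypothetical.

```python
# Illustrative sketch of combining unisensory decoding probabilities into a
# prediction for audiovisual trials. The combination rule (normalized product,
# naive-Bayes-style) and all numbers are examples only.
import numpy as np

phonemes = ["ba", "ga", "da", "ta"]

# hypothetical classifier outputs P(phoneme | neural activity) on one trial
p_auditory = np.array([0.70, 0.10, 0.15, 0.05])   # auditory-alone decoding
p_visual   = np.array([0.05, 0.60, 0.25, 0.10])   # viseme decoding

def combine(p_a, p_v):
    """Normalized product of the two unisensory probability vectors."""
    joint = p_a * p_v
    return joint / joint.sum()

p_av_predicted = combine(p_auditory, p_visual)

# hypothetical decoding probabilities from actual (incongruent) AV trials
p_av_observed = np.array([0.20, 0.15, 0.55, 0.10])

for ph, pred, obs in zip(phonemes, p_av_predicted, p_av_observed):
    print(f"/{ph}/: predicted {pred:.2f} vs observed {obs:.2f}")
```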

Hide abstract

 


P2.42 Word Frequency and the McGurk Effect

Dorsi, J., Rosenblum, L. & Chee, S.
UC Riverside

Show abstract

In the McGurk effect, visual speech can alter the perception of concurrently presented auditory speech (McGurk & MacDonald, 1976). For example, when auditory ‘Ba’ is dubbed onto visual ‘Va’, participants will often ‘hear’ the visual stimulus ‘Va’. Prior work has demonstrated that McGurk effects are stronger when they form words than when they form non-words (e.g. Brancazio, 2004), showing that lexical information can influence the McGurk effect. The current project seeks to further quantify this influence by evaluating whether word frequency bears on the effect. A pilot experiment used 20 word pairs, each comprising words differing only in the initial consonant, /B/ or /V/ (e.g. auditory “Base” + visual “Vase”). The data from this pilot study showed a robust correlation between the McGurk effect and the lexical frequency of the auditory and visual words, such that the McGurk effect is stronger when the visual word is more common (e.g. visual “Vase” [high frequency] produces more McGurk percepts than does visual “Versed” [low frequency]). The current work seeks to replicate this correlation with a larger set of words and to generalize this finding to other initial-consonant McGurk stimuli. Additionally, the current project examines how lexical frequency interacts with McGurk ‘fusion’ effects, in which the perceived consonant differs from both the auditory and visual stimuli (i.e. auditory /Bore/ + visual /Gore/ is perceived as /Door/; see McGurk & MacDonald, 1976).

Hide abstract

 


P2.43 Synchronized visual and olfactory stimuli induce VR-based out-of-body experiences

Yasushi A. & Hiroki O.
Tokyo Institute of Technology

Show abstract

An Out-of-Body Experience (OBE) is a phenomenon in which a person feels as if they see themselves from outside their physical body. OBEs mainly occur under special conditions. For example, Blanke and colleagues reported that electrical stimulation of the right angular gyrus can induce OBEs, and Ehrsson and colleagues reported that, using VR, synchronization of visual and tactile stimuli can induce OBEs. However, as far as we know, OBEs caused by synchronizing visual stimuli with sensory stimuli other than touch have not been reported. In this study, we evaluated whether synchronization of visual and olfactory stimuli induces OBEs by analyzing questionnaire ratings. To examine whether olfactory stimuli elicit OBEs, we used two conditions: in one, the olfactory and visual stimuli were presented synchronously; in the other, they were presented asynchronously. We also conducted a replication of Ehrsson’s experiment. The questionnaire used for the evaluation consisted of three test items (Q1 to Q3) and seven control items (Q4 to Q10) and was analyzed with ANOVA to test for significant differences between test and control items. In the synchronous condition, there was a significant difference between the test items and the control items (p < 0.01). When comparing the synchronous and asynchronous conditions, the scores of the test items in the synchronous condition were marginally or significantly higher than those in the asynchronous condition (Q1, Q2: p < 0.10; Q3: p < 0.01). The results of the replication were consistent with previous studies. Our results suggest that olfactory stimulation can induce an Out-of-Body Experience.

Acknowledgments: This work was partly supported by JSPS KAKENHI Grant Number 16H06789 and JST-COI.

Hide abstract

 


P2.44 Olfactory input influences intranasal somatosensory perception

Karunanayaka P., Lu J. & Sathian K.
Department of Radiology, Penn State College of Medicine

Show abstract

It is well known that odor perception is influenced by intranasal somatosensory input, e.g. with sniffing. A considerable body of earlier work has shown that, when pure olfactory inputs are presented monorhinally to humans, the side of stimulation cannot be localized reliably, while concomitant intranasal somatosensory stimulation via air-puffs or sniffs enables reliable localization. However, it is not clear whether olfactory input modulates perception of intranasal somatosensory stimuli. Here we investigated this issue in healthy humans with normal olfactory function, using the odorant phenyl ethyl alcohol (rose) and somatosensory stimulation with weak air-puffs delivered intranasally. Visual cues were used to inform participants to briefly hold their breath while weak air-puffs were delivered to either nostril, in the presence or absence of the odorant. In a two-alternative forced choice, participants indicated whether their perception of the air-puff was in the left or right nostril. Consistent with prior research, localization accuracy was essentially at chance in a control condition when the odorant was delivered alone, without an air-puff. Yet, the combination of the odorant and a weak air-puff in the same nostril significantly improved localization accuracy for the air-puff, relative to presentation of the air-puff without the odorant. This enhancement of somatosensory localization was absent when the air-puff and odorant were presented to different nostrils, arguing against a non-specific alerting effect of the odorant. Thus, olfactory input does indeed influence processing of intranasal somatosensory stimuli. It remains for future work to establish the locus of this multisensory interaction and to clarify the underlying neural mechanism.

Hide abstract

 


P2.45 Party music and drinking decisions: multisensory effects of alcohol-related cues

James, T.W. & Nikoulina, A.
Indiana University

Show abstract

Decisions are based on integration of sensory signals from multiple sources that make up the environmental context. Individuals with alcohol use disorder show hyper-reactivity to alcohol-related sensory cues from different sensory sources, however, little is known about how these individuals integrate multiple alcohol-related sensory cues to make drinking decisions. Recently, our lab has developed a paradigm for studying how drinking decisions are influenced by visual alcohol-related cues. Here, we extended that paradigm to the multisensory realm by pairing visual alcohol cues with party music (i.e., music highly associated with heavy-drinking environments). Subjects were all young adult women, separated into heavy- and light-drinking cohorts. Subjects were asked to list their favorite songs for ‘going out’ (henceforth, party music) and ‘staying in’ (home music), which were then played as song clips during the task. For the task, participants were asked to imagine themselves in one of several risky scenarios while music played in the background. They were shown visual alcohol and food cues and were asked to report their likelihood of drinking or eating the pictured item. We found that listening to party music increased the likelihood of a risky decision (over home music or no music) significantly more for alcohol cues than food. Party music did not influence heavy and light drinkers differently, even though heavy drinkers were much more likely than light drinkers to endorse alcohol over food decisions. The results show that auditory and visual alcohol-related cues interact to influence decisions. The results highlight the importance of considering domain-specific sensory experience and associations when studying decision-making in context.

Hide abstract

 


P2.46 Differential effects of music and pictures on taste perception – an fMRI study

Callan, A., Callan, D. & Ando, H.
National Institute of Information and Communications Technology

Show abstract

Like other sensory modalities, gustatory perception is multimodal in nature. It is easy to imagine that the look and smell of food, as well as the sounds of cooking and eating, influence our gustatory perception. It is harder to believe that sounds not directly related to food can affect how we taste. Yet Crisinel et al. (2012) demonstrated that the pitch of background music affects taste perception: people rated a piece of toffee as sweeter with higher-pitched background music and as more bitter with lower-pitched background music. In this fMRI study, we investigated how indirect auditory taste cues and direct visual taste cues modulate neural activity in the primary gustatory cortex in the absence of taste stimuli. High-pitched music and dessert pictures were used to reflect sweet taste, and low-pitched music and meal pictures were used to reflect non-sweet taste. Stimuli were presented in auditory-only, visual-only, or audio-visual conditions. Results from the auditory-only condition indicated that the sweet music enhanced activity in the posterior insula (pIns) more than the non-sweet music. In the visual-only condition, no significant differences were found. Paired comparisons of the audio-visual conditions showed significant differences: the sweet-music and dessert-picture condition activated the right pIns more than the sweet-music and meal-picture condition, whereas the non-sweet-music and meal-picture condition activated the left anterior insula (aIns) more than the sweet-music and meal-picture condition. Region-of-interest analyses revealed different patterns of activity in the right pIns and left aIns. The right pIns was activated by the sweet music but not by the non-sweet music. These results suggest that the enhanced activity in the pIns was driven by the sweet music and that presentation of non-sweet meal pictures suppressed this activity. In contrast, both types of food pictures activated the left aIns, and this activity was enhanced by simultaneous presentation of gustatorily congruent music.

Hide abstract

 


P2.47 Comparing the effects of vision and smell in red wine quality judgments by experts: constrained tasting vs. unconstrained tasting

Caissie, A., De Revel, G. & Tempère, S.
Univ. Bordeaux, ISVV, EA 4577 OEnologie, F-33140, Villenave d’Ornon, France

Show abstract

In this study, we evaluated the contributions of vision and smell to red wine quality judgments by expert wine tasters. We compared responses in two unconstrained (i.e., global) and two constrained (i.e., vision-only and smell-only) wine tastings. In each tasting, 47 wine tasters (18 women) were instructed to rate 20 red wines successively on five continuous response scales: Arousal, Quality, Certainty, Image and Hedonism. Arousal was defined as the strength of the sensory response (lower vs. higher) to the wine being tasted. Quality was defined as the degree of exemplarity or liking of the wine being tasted relative to a pre-conditioned quality standard (poor vs. good example of quality). Ratings of the wine’s potential to evoke images (low vs. high image) and of hedonism (dislike vs. like) were also collected. Wine tasters additionally rated the certainty of their quality ratings. A priori designations of quality served as the criteria for the red wines. Each wine belonged to the same sensory space (i.e., secondary wines vs. premium wines) from a Protected Designation of Origin (PDO) with which the wine tasters had prior experience. Overall, our results suggest a coherent quality concept across unconstrained and constrained wine tastings, with a clear distinction favoring premium wines on all scales. However, we observed modality-specific effects on arousal and evoked images, as well as on certainty. Wine tasters were less certain about their quality judgments in constrained tastings than in unconstrained (global) tastings. They also reported less arousal and fewer evoked images with vision than with smell. Repeatability of ratings, from unconstrained to constrained (Global-Visual, Global-Smell) tastings, suggested more stability across modalities for arousal, quality and evoked images for premium wines than for secondary wines. Visual judgments were more closely associated with global judgments (Global-Visual) than were smell judgments (Global-Smell).

Hide abstract

 


P2.48 Acute pain does not disrupt updating of peripersonal space and body representation

Vittersø, A., Halicka, M., Proulx, M.J., Wilson, M., Buckingham, G. & Bultitude, J.
University of Bath, UK; University of Exeter, UK

Show abstract

The multisensory representations of our body and its surrounding space are constantly updated as we interact with objects in our environment, for instance during active tool-use. People with chronic pain conditions can present with distorted representations of their body and peripersonal space compared to pain-free individuals. It has been suggested that disruption of the processes involved in updating these representations could underlie some of these painful conditions. However, it is not known why such updating problems might occur: for example, whether they reflect a difference in cognitive processing that pre-dates the development of pain, or whether they are a consequence of pain. To test the latter, we induced acute pain in healthy individuals using 1% capsaicin cream and examined its effect on participants’ ability to update the representation of their body and peripersonal space during tool-use. Updating of the body representation was examined by comparing tactile distance judgements on participants’ arms before and after tool-use. Updating of the peripersonal space representation was examined during active tool-use by measuring changes in reaction times and error rates for decisions about vibro-tactile stimuli presented through the handles of the tools in the presence of visual distractors at the tips of the tools. Acute pain did not alter performance on either task compared to an active placebo cream or a neutral control condition. This suggests that acute pain is not sufficient to account for the distorted representations of the body and its surrounding space observed in people with painful conditions.

Acknowledgments: Supported by the GW4 BioMed Medical Research Council (UK) Doctoral Training Partnership.

Hide abstract

 


P2.49 Visual Assessment of Tactile Roughness Intensity

Kim, J.(1,2,3), Bülthoff, I.(1) & Bülthoff, H.H.(1)
1. Max Planck Institute for Biological Cybernetics, Tübingen, Germany
2. Center for Neuroscience Imaging Research, Institute for Basic Science (IBS), Suwon, Republic of Korea
3. Department of Biomedical Engineering, Sungkyunkwan University, Suwon, Republic of Korea

Show abstract

A number of neuroimaging studies have consistently reported significant activations in human somatosensory cortices during the observation of touch. However, it is still debated which brain regions are mainly associated with the processing of observed touch (e.g. primary somatosensory cortex, S1; secondary somatosensory cortex, S2; posterior parietal cortex, PPC). In this fMRI study, we searched for brain regions exhibiting neural activity patterns that encode visually evoked roughness intensities. Fifteen healthy volunteers with no deficits in tactile or visual processing participated. They first explored a set of differently colored sandpapers with their right index fingertip outside the MR room. During the fMRI experiment, video clips of tactile explorations of the sandpaper set were presented and participants were asked to recall the perceived roughness intensity as vividly as possible. The neural representations of roughness intensity could be successfully decoded from the brain signals elicited by the video clips in the absence of any intrinsic tactile content. In particular, a random-effects group analysis revealed four brain regions that distinctly encoded the different roughness intensities: the bilateral PPC, the primary visual cortex (V1), and the ipsilateral S1. Although we found activation in ipsilateral S1, we cannot confirm S1 engagement because the majority of previous studies have reported activations in contralateral S1. Significant decoding accuracies in V1 may be attributable to differences in the visual content of the presented video clips. Therefore, among the brain regions mentioned above, our findings support the hypothesis that the PPC in particular plays an important role in the processing of observed touch.

Hide abstract

 


P2.50 Predicting the endpoint of an ongoing reaching movement: You need more than vision but do you really need to plan the action?

Kumawat, A.S., Manson, G.A., Welsh, T N. & Tremblay L.
Centre for Motor Control, Faculty of Kinesiology & Physical Education, University of Toronto

Show abstract

The motor commands generated prior to voluntary actions are deemed important for the control of voluntary movements (Wolpert & Ghahramani, 2000). According to the multiple processes of online control model (Elliott et al., 2010), the earliest phase of an ongoing movement (i.e., impulse regulation) relies on the efferent commands, while the following phase (i.e., limb-target regulation) requires only vision and proprioception. The purpose of this study was to provide original empirical evidence regarding the importance of visual and proprioceptive feedback availability versus the efferent command for making endpoint error predictions (i.e., limb-target regulation). Visual information was limited to a brief window of vision (40 ms) early in the trajectory, as it has been shown that vision provided early in the movement can be used to make accurate endpoint error judgements while also allowing assessment of online feedback utilization. In the experimental conditions, participants: a) reached actively to a target (efference + vision + proprioception: EVP); b) were guided to the target by a robotic arm (vision + proprioception: VP); or c) observed a fake hand guided to a target (vision only: V). The limb trajectories from the active condition (EVP) were used to program the robot for the two other experimental conditions (VP, V), so the trajectories in the robot-guided conditions were the participants’ own. After each trial, participants reported whether the hand undershot or overshot the target, and the accuracy of these judgements was analysed. Participants’ endpoint error predictions were better in the active condition (EVP) than in both robot-guided conditions (VP & V), and better with both vision and proprioception (VP) than with vision alone (V). Thus, online limb-target regulation processes may rely not only on vision and proprioception but also on the efferent command.

Acknowledgments: Natural Sciences and Engineering Research Council of Canada (NSERC), University of Toronto (UofT), Canada Foundation for Innovation (CFI), Ontario Research Fund (ORF).

Hide abstract

 


P2.51 The duration aftereffect occurs in the tactile modality

Li B. & Chen L.
School of Psychological and Cognitive Sciences, Peking University

Show abstract

Adaptation to a relatively short or long stimulus leads to a robust repulsive duration aftereffect. This illusory temporal percept has been shown in the visual and auditory modalities, and it hardly transfers between the two. Here, we investigated whether the duration aftereffect can also be produced in the tactile modality. We implemented two experiments. In Experiment 1, we used perceptual estimation: participants compared the duration of each test stimulus with the mean duration of the group of test stimuli (the method of single stimuli). In Experiment 2, we adopted a temporal reproduction paradigm: participants pressed and released a button to reproduce the duration of the test stimulus as accurately as possible.

We replicated the repulsive effect found in the auditory and visual modalities: adaptation to a relatively short vibrotactile stimulus resulted in the subsequent vibrotactile stimulus being perceived as longer, and vice versa for adaptation to a long stimulus (Exp. 1). Moreover, reproduced durations were significantly longer after adaptation to a relatively short vibrotactile stimulus than after adaptation to a long one (Exp. 2). These findings point to an abstract representation of time across different sensory modalities.

Hide abstract

 


P2.52 Haptic-visual interactions for stiffness perception in the human cerebral cortex studied with an fMRI-compatible pinch device

Liu J., Callan A., Wada A. & Ando A.
Center for Information and Neural Networks (CiNet), National Institute of Information and Communications Technology (NICT) and Osaka University, Japan

Show abstract

Haptic-visual interactions in the perception of object properties such as size, shape, and stiffness have been extensively studied in behavioral experiments using virtual reality systems. However, studying these interactions with fMRI is severely limited by the fact that most conventional electromagnetic devices are disturbed by the magnetic field and degrade imaging quality. To overcome this problem, we developed an fMRI-compatible device using an ultrasonic motor and optical sensors to simulate the sensation of pinching objects. Using this pinch device, this study investigated the neural substrates of multimodal stiffness perception. Although stiffness perception intrinsically relies on haptic information, stiffness is inherently an integrated property of displacement and force. The psychophysical literature has reported that the perceived multimodal estimate of stiffness is consistently influenced by visual feedback, and has suggested that each modality combines cues to arrive at an estimate of stiffness before these estimates are integrated into a multimodal value. In our experiments, participants pinched virtual objects of different stiffness levels and received visual feedback that was either congruent with, or faster/slower than, their finger movements. By examining cortical regions that were more activated by haptic-visual information than in unimodal conditions during a one-back stiffness comparison task, we identified regions in the inferior parietal lobule, including the supramarginal gyrus and angular gyrus, that differed from the regions (contralateral postcentral gyrus, bilateral parietal operculum, visual cortex) reported in previous research for haptic-only stiffness perception and visual tasks, and which may be candidates for haptic-visual interaction in stiffness perception.

Acknowledgments: This research is partially supported by JST Research Complex Promotion Program.

Hide abstract

 


P2.53 Apparent increase in lip size improves tactile discrimination

Ambron E.A., Medina J.M., Coyle M.C. & Coslett, H.B.C.
University of Pennsylvania

Show abstract

Magnified vision of a body part improves tactile acuity. Here we explored the effect on tactile acuity of an apparent increase in the size of a body part induced by an anesthetic cream. Application of an anesthetic cream (benzocaine) to the lips caused many subjects to perceive their lips as larger, whereas this enlargement was not perceived after application of a simple moisturizing cream. Tactile discrimination, as measured by two-point discrimination, improved as a function of the degree of increase in perceived lip size with the anesthetic cream, while this effect was not observed with the moisturizing cream. These data demonstrate that a subjectively experienced increase in the size of a body part enhances tactile discrimination on that body part. They are consistent with the hypothesis that magnification effects are mediated by a malleable, experience-dependent representation of the human body that we have termed the body form representation.

Hide abstract

 


P2.54 Differential Importance of Visual and Haptic Information in Postural Control among Different Standing Postures

Cheung, T.C.K., Bhati, P., Jenish, C. & Schmuckler, M.A.
Department of Psychology, University of Toronto Scarborough

Show abstract

Maintaining balance requires multisensory input. Previous work has demonstrated fundamental roles for both visual and haptic inputs in postural control, with increased postural stability in lit versus dark environments, and increased postural stability when observers receive light fingertip contact, even when this contact is not weight supporting. Interestingly, little work has examined the role of such visual and haptic inputs in standing postures varying in the length and width of the base of support. A pair of experiments examined adults’ postural stability under conditions systematically combining the presence versus absence of visual input, the presence versus absence of haptic input, and four different standing postures: natural stance (feet shoulder width apart), feet together, tandem stance (toe of the back foot touching the heel of the front foot), and Chaplin stance (heels together, feet at approximately a 90-degree angle). The experiments differed in the type of haptic input provided, with Experiment 1 providing stable light fingertip support and Experiment 2 providing unstable light fingertip support. Under conditions of stable haptic input, measures of postural stability demonstrated that as the stances became increasingly unstable (e.g., decreased base of support), visual and haptic inputs became increasingly salient, as indicated by interaction effects. When haptic input was unstable, however, the benefits typically observed for haptic input disappeared, although the benefit of visual input was retained. Thus, although both visual and stable haptic information increasingly facilitated stability as standing postures ranged from stable to unstable, with unstable haptic information visual input dominated the regulation of stance across standing postures. These results can potentially inform rehabilitation and risk prevention for people with high fall risk and motor disorders.

Hide abstract

 


P2.55 Multisensory benefits and multisensory interactions are not equivalent: A comparative, model-based approach

Innes, B.R. & Otto, T. U.
University of St. Andrews

Show abstract

Multisensory signals allow for faster response times (RTs) than their unisensory components. While this redundant signals effect (RSE) has been widely studied with diverse signals, no modelling framework has explored the RSE across studies systematically. To enable a comparative approach, we propose three steps. The first quantifies the size of the RSE against parameter-free race model predictions. The second quantifies processing interactions beyond the race mechanism: history effects and so-called violations of Miller’s bound. The third models the RSE at the level of RT distributions by adding two free model parameters: a correlation parameter covers history effects, and additional noise in multisensory conditions accounts for violations of Miller’s bound. Mimicking the diversity of studies in a 2×2 design, we then tested different audio-visual signals that target the two processing interactions. The first factor, Stimulus Construction, had levels ‘simple’ (i.e. non-random, with strong transient signal onsets) and ‘complex’ (i.e. randomly generated, with weak transient signal onsets). The second factor, Signal Features, had levels ‘consistent’ (i.e. only one signal variant per modality) and ‘alternating’ (i.e. two signal variants per modality). We show that the parameter-free race model provides a strong overall predictor of the RSE across factors. Regarding the additional interactions, we found that history effects, and the associated correlation parameter, do not depend on the repetition of low-level signal features. Furthermore, larger violations of Miller’s bound, and consequently the associated additional noise, seem to be linked to transient signal onsets. Critically, this latter parameter dissociates from the size of the RSE, which demonstrates that multisensory interactions and multisensory benefits are not equivalent. Overall, we argue that our approach, as a blueprint, provides both a general framework and the precision needed to understand the RSE across sensory modalities and participant groups.
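
For illustration, the sketch below implements the first two steps on simulated response times: the redundancy gain relative to the faster unisensory channel, the parameter-free race model prediction under channel independence (Raab, 1962), and the test against Miller’s bound. The distributions and sample sizes are assumptions for the demo, not the authors’ pipeline.

```python
# Minimal sketch: compare the multisensory RT distribution with (a) the
# parameter-free race model prediction assuming independent channels and
# (b) Miller's bound. RTs are simulated; real data would be single-trial
# response times per condition.
import numpy as np

rng = np.random.default_rng(3)
rt_a = rng.normal(0.35, 0.05, 500)                 # auditory-only RTs (s), simulated
rt_v = rng.normal(0.38, 0.06, 500)                 # visual-only RTs (s), simulated
rt_av = np.minimum(rng.normal(0.35, 0.05, 500),    # redundant trials: race of two
                   rng.normal(0.38, 0.06, 500))    # independent channels (for the demo)

t = np.linspace(0.2, 0.6, 200)                     # common time axis (s)
dt = t[1] - t[0]

def ecdf(rt, t):
    """Empirical cumulative distribution function evaluated at times t."""
    return np.searchsorted(np.sort(rt), t, side="right") / rt.size

F_a, F_v, F_av = ecdf(rt_a, t), ecdf(rt_v, t), ecdf(rt_av, t)

race_prediction = F_a + F_v - F_a * F_v            # probability summation, independent channels
miller_bound = np.minimum(F_a + F_v, 1.0)          # upper bound on any race model

benefit = np.sum(F_av - np.maximum(F_a, F_v)) * dt              # gain over the faster channel
violation = np.sum(np.clip(F_av - miller_bound, 0, None)) * dt  # area above Miller's bound

print(f"multisensory benefit (area between CDFs): {benefit:.4f}")
print(f"Miller-bound violation (area): {violation:.4f}")
```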

Hide abstract

 


P2.56 Leveraging multisensory neurons and circuits in assessing theories of consciousness

Noel, J.P., Ishizawa, Y., Patel, S.R., Brown, E.N., Eskandar, E.N. & Wallace, M.T.
Vanderbilt University

Show abstract

Detailing the neural mechanisms enabling wakefulness and conscious experience is a central and unanswered question in systems neuroscience, despite its paramount clinical implications for a host of disorders of consciousness. Predicated on two of the frontrunner theories of consciousness, the information integration theory (IIT) and the global neuronal workspace (GNW) theory, we generated a number of concrete neurophysiological predictions and tested them with a neuronal dataset collected from macaques. According to the IIT and its “consciousness-meter” (phi, Φ), in transitions between conscious and unconscious states, neurons that actively integrate information (AND gates), as opposed to those that simply converge information (XOR gates), should be most readily impacted. Conversely, when an organism is aware, neurons that integrate should exhibit properties of consciousness to a greater degree than neurons that converge information. We tested these predictions by recording single-unit activity in primary somatosensory (S1) and ventral premotor (vPM) areas in non-human primates that were administered audio-tactile (AT), tactile (T), and auditory (A) stimuli and whose states of consciousness were modulated via propofol anesthesia. Responding to either A or T stimulation was taken to indicate an XOR gate, while being activated to a greater extent by the co-occurrence of A and T (i.e., AT) stimulation than by each stimulus alone (i.e., multisensory enhancement) was taken to indicate an AND gate. Contrary to the IIT prediction, when animals are rendered unconscious, convergent neurons (XOR gates) stop converging to a greater degree than integrative neurons (AND gates) stop integrating. Furthermore, measures of neural complexity and noise correlations track the animals’ conscious state more faithfully for convergent neurons than for integrative neurons. On the other hand, according to the GNW theory, conscious percepts should result in sustained neural activity and in greater single-trial co-activation of S1 and vPM than under unconscious conditions. Both of these predictions are supported by the neurophysiological data. Collectively, these results provide more empirical support for the GNW theory of consciousness than for the IIT.
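
The operational definitions used for the neuron classification can be sketched as follows; the firing rates and the response criterion are illustrative assumptions, not values from the recorded dataset.

```python
# Sketch of the operational classification described in the abstract:
# "convergent" (XOR-like) units respond to A or T alone; "integrative"
# (AND-like) units additionally show multisensory enhancement (AT > max(A, T)).
from dataclasses import dataclass

@dataclass
class Unit:
    name: str
    rate_a: float        # mean evoked rate, auditory alone (spikes/s)
    rate_t: float        # tactile alone
    rate_at: float       # audio-tactile
    baseline: float      # spontaneous rate

def classify(u: Unit, criterion: float = 2.0) -> str:
    responds_a = u.rate_a > u.baseline + criterion
    responds_t = u.rate_t > u.baseline + criterion
    enhanced = u.rate_at > max(u.rate_a, u.rate_t) + criterion
    if enhanced and (responds_a or responds_t):
        return "integrative (AND-like)"
    if responds_a or responds_t:
        return "convergent (XOR-like)"
    return "unresponsive"

# hypothetical example units
units = [Unit("S1-01", rate_a=3.0, rate_t=18.0, rate_at=20.0, baseline=2.0),
         Unit("vPM-07", rate_a=9.0, rate_t=11.0, rate_at=25.0, baseline=2.0)]

for u in units:
    print(u.name, "->", classify(u))
```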

Hide abstract

 


P2.57 A simple law that governs most multisensory amplifications and enhancements

Billock, V.A. & Havig, P.R.
College of Optometry, Ohio State University

Show abstract

Under a vast variety of conditions, the presence of one sensory signal can enhance or amplify another (Stanford & Stein, 2007). Usually weak signals are amplified more than strong ones (the principle of inverse effectiveness), but there has been little attempt, to our knowledge, to quantify the lawful nature of the amplification. We find that (with one important exception) the amplified response is a power law of the unamplified response, with a compressive exponent that accounts for the general finding of inverse effectiveness; i.e., AmplifiedResponse = a * UnamplifiedResponse^n. This simple power-law amplification accounts for both human psychophysical data and animal electrophysiology. It accounts for spike-rate data in cortical subthreshold multisensory cells, and for mass-action cortical current source densities and multiunit activity. It accounts for amplification both between senses (visual modulated by auditory, auditory modulated by tactile) and within a sense (in this case human color vision). The r^2 values for these power-law fits are so high that these enhancements can all be considered gated amplifications rather than nonlinear combinations. The sole but important exception is overtly multimodal neurons, especially in the superior colliculus. The spike-rate enhancements in animals and the psychophysical enhancements in humans have slightly compressive exponents (circa 0.85). Some other enhancements show greater compression (more inverse effectiveness). The similarity of psychophysical enhancement and spike-rate enhancement in anesthetized animals argues against attention playing a role in psychophysical multisensory enhancement. A neural model grounded in sensory binding theory closely approximates the power-law behavior seen psychophysically and electrophysiologically.
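
For illustration, a power law of this form can be fitted by linear regression in log-log coordinates, as sketched below on simulated data with a compressive exponent near the reported ~0.85; this is not the authors’ own fitting procedure.

```python
# Sketch of fitting Amplified = a * Unamplified**n by linear regression in
# log-log space. Data are simulated with a compressive exponent of 0.85.
import numpy as np

rng = np.random.default_rng(4)
unamplified = np.linspace(5, 100, 30)                       # e.g. spike rates (a.u.)
a_true, n_true = 2.0, 0.85
amplified = a_true * unamplified ** n_true * rng.lognormal(0, 0.05, unamplified.size)

# log(amplified) = log(a) + n * log(unamplified): fit a line in log-log space
slope, intercept = np.polyfit(np.log(unamplified), np.log(amplified), deg=1)
n_hat, a_hat = slope, np.exp(intercept)

residuals = np.log(amplified) - (intercept + slope * np.log(unamplified))
r_squared = 1 - residuals.var() / np.log(amplified).var()

print(f"fitted exponent n = {n_hat:.2f}, gain a = {a_hat:.2f}, r^2 = {r_squared:.3f}")
```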

Acknowledgments: Supported by NSF 1456650 and an ORISE/AFRL Faculty Fellowship to V. Billock.



P2.58 A perspective on two potential mechanisms underlying different modes of multisensory integration

Nidiffer, A.R., Ramachandran, R. & Wallace, M.T.
Vanderbilt University


The integration of signals across modalities has previously been described based on their spatial and temporal proximity. In short, unisensory signals that occur in close spatial and temporal proximity tend to produce response enhancements, whereas stimuli falling outside a spatial or temporal window are not integrated or result in response depressions. Spatial and temporal proximity have been demonstrated to be important factors in multisensory responses as measured in behavior, electrophysiology, and imaging. However, under certain conditions, these principles fail to account for multisensory interactions. Recent behavioral findings have related some multisensory behaviors to another feature, one based on the similarity of the temporal structure between unisensory cues. This cue, temporal correlation, serves as a strong predictor of whether two unisensory cues (for example, mouth movements and vocal intensity) originate from a common source (a speaker), i.e., multisensory binding. Binding has been shown to be somewhat resistant to spatial and temporal disparities. Here, we hypothesize that multisensory interactions depend on at least two separate mechanisms: one based on the proximity of sensory cues in the environment and another based on the similarity of their temporal structure. We present a reanalysis of data from a recent experiment suggesting that these two modes can be measurably dissociated. Further, we hypothesize a potential developmental link between the mechanisms. We go on to propose a set of experiments to test these hypotheses.



P2.59 An analysis and modelling toolbox to study multisensory response times

Otto, T.U.
University of St Andrews


Responses to redundant signals from different sensory modalities are typically faster than responses to the unisensory components. This redundant signals effect (RSE) has been extensively studied not only with an impressive variety of signals, covering all five classic senses, but also with different subject populations focusing on development, aging, and clinical samples. Remarkably, despite intensive research, a standardized methodology to systematically analyse and interpret the RSE has still not been established. Moreover, the most obvious modelling approach to explain the effect, the so-called race model championed by Raab (1962), is typically not fully appreciated in its explanatory power. To facilitate a comparative approach across studies, we present here a toolbox, implemented in MATLAB, which includes a wide range of functions that allow users to (1) simulate the RSE, (2) perform standardized operations of basic RT analysis, (3) precisely measure and analyse the RSE at the level of response time distributions, and (4) fit the most recent race model, as proposed by Otto and Mamassian (2012), using maximum likelihood estimation. The presentation of the model functions is accompanied by parameter recovery simulations to validate the fitting procedures. One critical finding is that parameter recovery with reaction time distributions averaged across subjects can be biased; such averaging should consequently be avoided when studying the RSE. The use of the toolbox is illustrated by example code, and all functions are supported by help documentation.
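The toolbox itself is written in MATLAB; purely as an illustration of the race-model idea it builds on, the following Python sketch simulates Raab-style statistical facilitation with made-up RT distributions.

```python
# A minimal Python sketch (not the MATLAB toolbox itself) of Raab-style
# statistical facilitation: redundant-signal RTs modelled as the minimum
# of two parallel unisensory decision times. Distribution parameters are
# made up for illustration.
import numpy as np

rng = np.random.default_rng(1)
n_trials = 10_000

# Hypothetical unisensory RTs (ms): ex-Gaussian-like via normal + exponential
rt_A = rng.normal(250, 30, n_trials) + rng.exponential(40, n_trials)
rt_V = rng.normal(270, 30, n_trials) + rng.exponential(40, n_trials)

# Race model: the faster channel triggers the response on redundant trials
rt_AV = np.minimum(rt_A, rt_V)

print(f"mean RT  A: {rt_A.mean():.0f} ms, V: {rt_V.mean():.0f} ms, "
      f"AV (race): {rt_AV.mean():.0f} ms")
# The redundant signals effect is the speed-up of AV relative to the faster
# unisensory condition, here produced purely by probability summation.
print(f"RSE: {min(rt_A.mean(), rt_V.mean()) - rt_AV.mean():.0f} ms")
```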

References

Raab (1962). Statistical Facilitation of Simple Reaction Times. Transactions of the New York Academy of Sciences, 24(5), 574-590.

Otto, & Mamassian (2012). Noise and correlations in parallel perceptual decision making. Current Biology, 22(15), 1391-1396.

Acknowledgments: This work was supported by Biotechnology and Biological Sciences Research Council (BB/N010108/1).



P2.60 A neurocomputational model of synapse maturation explains Bayesian estimate and causal inference in a multisensory environment

Cuppini, C., Magosso, E. & Ursino, M.
University of Bologna


Experimental and theoretical studies suggest that the brain integrates information from different sensory modalities following Bayesian rules, to generate an accurate percept of external events. Despite the empirical evidence, neural mechanisms responsible for this behavior are still insufficiently understood.

The aim of this work is to summarize the main aspects of a neurocomputational model realized recently, based on physiologically plausible hypotheses. The model produces estimates of external events in agreement with Bayesian rules, and suggests architectural and neuronal mechanisms responsible for such abilities. Additionally, it can be used to investigate how a multisensory environment can affect the maturation of multisensory integrative abilities.

The model presents a hierarchical structure: two unisensory layers (auditory and visual) receive the corresponding sensory input through plastic receptive field synapses. These regions are topologically organized and reciprocally connected via excitatory synapses, which encode the spatial and temporal co-occurrence of visual-auditory inputs. Based on sensory experience, these synapses are trained by means of Hebbian learning rules. Finally, the unisensory regions send excitatory connections to a third multisensory layer, responsible for the solution of the causal inference problem.
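As a rough illustration of how such cross-modal synapses could be trained, the sketch below applies a simple Hebbian rule to co-active, topologically aligned auditory and visual units; layer size, tuning width and learning rate are arbitrary choices for demonstration, not the model's actual parameters.

```python
# Schematic sketch (not the authors' model code) of a Hebbian rule that
# strengthens cross-modal synapses between co-active auditory and visual
# units during audio-visual experience.
import numpy as np

rng = np.random.default_rng(0)
n_units = 50                          # units per unisensory layer
W_av = np.zeros((n_units, n_units))   # visual -> auditory cross-modal weights
lr, w_max = 0.05, 1.0

def gaussian_activity(center, n=n_units, sigma=3.0):
    x = np.arange(n)
    return np.exp(-(x - center)**2 / (2 * sigma**2))

for _ in range(500):
    pos = rng.integers(5, n_units - 5)                    # common AV source
    act_a = gaussian_activity(pos + rng.normal(0, 1))     # noisy auditory map
    act_v = gaussian_activity(pos + rng.normal(0, 1))     # noisy visual map
    # Hebbian update with a soft upper bound on the weights
    W_av += lr * np.outer(act_a, act_v) * (w_max - W_av)

# After training, strong weights lie near the diagonal: spatially aligned
# auditory and visual units have become reciprocally connected.
print(np.round(np.diag(W_av)[:5], 2), np.round(W_av[0, -1], 4))
```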

Simulations show that, after multisensory training, the receptive fields shrink to reproduce the accuracy of the perceived inputs, realizing the likelihood estimate of unisensory spatial position. Moreover, information on the prior probability of the co-occurrence of audio-visual stimuli is encoded in the cross-modal synapses, realizing a Bayesian estimate in multisensory conditions. The model has been tested in a variety of stimulus conditions, comparing its results with behavioral data reported in the literature. Among others, the network can account for the ventriloquism illusion, the effect of audio-visual spatial disparity on the percept of unity, and the dependence of the auditory error on causal inference. Finally, the model results suggest that probability matching is the perceptual strategy used in auditory-visual spatial localization tasks.



P2.61 Dynamic decoding of unisensory and multisensory stimulus processing in conscious and unconscious primate cortex

Tovar, D.A., Noel, J.P., Ishizawa, Y., Patel, S.R., Brown, E.N., Eskandar, E.N. & Wallace, M.T.
Vanderbilt Brain Institute, Vanderbilt University, Nashville, USA


While awake, given a sufficiently salient stimulus, we are seamlessly able to identify its presence and characteristics. However, in an unconscious state, we are no longer able to identify either the presence or the characteristics of the formerly salient stimulus. Although this process takes place every night when we go to sleep, much is unknown about how the brain processes stimuli in lower and higher order cortical areas across conscious states. In this study, we used time-resolved neural decoding to analyze neural activity collected with an electrode microarray in primary somatosensory (S1) and ventral premotor (vPM) areas. Primates were presented with auditory, tactile, and audiotactile stimuli while awake and unconscious. The primary goals of the analysis were to find decoding differences across states of consciousness for 1) detecting the presence or absence of stimuli and 2) differentiating the modality of the stimulus (auditory, tactile, or audiotactile). In general, our results show above-chance decoding for both conscious and unconscious states. However, the conscious state showed higher decoding for both stimulus detection and modality in both S1 and vPM, as well as longer, sustained, above-chance decoding compared to the unconscious state. This finding agrees with previous work showing longer sustained neural activity for conscious percepts. Interestingly, the decoding difference between conscious and unconscious states was significantly greater (Wilcoxon signed-rank test, p < 0.001) for stimulus detection than for modality in both S1 and vPM. This finding suggests that conscious awareness confers the greatest advantage for detecting the presence of stimuli in the brain, while secondarily enhancing our ability to detect the characteristics of those stimuli. Further analyses will use generalization decoding techniques to investigate common neural substrates across time and conscious states.
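The following sketch illustrates the general logic of time-resolved decoding (not the authors' pipeline): a classifier is cross-validated at each time point of a hypothetical trials × channels × time array, and stretches of above-chance accuracy indicate when stimulus information is decodable.

```python
# Conceptual sketch of time-resolved decoding, assuming a hypothetical array
# X of shape (trials, channels, timepoints) and binary trial labels y
# (e.g., stimulus present vs. absent); placeholder random data are used here.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_channels, n_times = 200, 64, 50
X = rng.normal(size=(n_trials, n_channels, n_times))
y = rng.integers(0, 2, n_trials)
X[y == 1, :, 20:35] += 0.3        # inject a fake "evoked" difference

accuracy = np.empty(n_times)
for t in range(n_times):
    clf = LinearDiscriminantAnalysis()
    accuracy[t] = cross_val_score(clf, X[:, :, t], y, cv=5).mean()

# Above-chance (> 0.5) stretches of `accuracy` indicate when stimulus
# information is present in the recorded population.
print(accuracy.round(2))
```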



P2.62 Therapeutic applications: Dance in the treatment of neurodegenerative and chronic disorders

Barnstaple, R., Fontenasi, C. & DeSouza, J.
York University


Dance, an intensively multimodal activity, engages both top-down and bottom-up brain processes (Bar & DeSouza, 2016), providing a rich source of material for researchers interested in the integration of movement and cognition (Bläsing et al, 2012). Simultaneously involving memory, visual-spatial awareness, kinesthetic and vestibular information, motor imagery, touch, imagination, timing, sound and musical/social elements, dance practices challenge the central nervous system (CNS) in novel and stimulating ways (Dhami, Calvo-Moreno & DeSouza, 2015). Fostering integration of multisensory stimuli and coordinating both inter- and intrapersonal responses, movement to music nurtures plasticity, stimulates development, and promotes neurorehabilitation across the lifespan. Dance interventions for neurodegenerative conditions such as Parkinson’s disease (PD) and Alzheimer’s disease (AD), as well as other conditions including multiple sclerosis (MS), chronic pain (CP), and mental health, have been the subject of numerous studies over the last decade, with observed benefits ranging from physical (balance, gait, diminution of motor symptoms) to cognitive (improved performance on memory and concentration tasks) and emotional improvements (mood scores and self-efficacy). Multisensory integration provides a useful framework through which to understand the effects of dance while offering theoretical models that may explain the mechanisms by which improvements are accomplished. We present initial data from two different populations participating in dance therapy – PD and CP – demonstrating how the benefits of dance therapy may be related to improvements in multisensory integration, while also modelling how tools related to multisensory integration may be well adapted to measuring the effects of dance interventions.



P2.63 Visual and auditory cueing of learnt dance choreography in expert dancers and people with Parkinson’s disease (PD).

DeSouza, J.F.X.
Centre for Vision Research, York University


At IMRF 2013, we presented analyses from our project examining the neural networks involved in learning a new ballet to a novel piece of music over 8 months, with a focus on auditory cortex (DeSouza & Bar, 2012; DOI: 10.1163/187847612X646677). We scanned subjects (expert dancers and people with PD) up to four times using fMRI. To date, we have scanned 18 professional dancers from the National Ballet of Canada, 12 controls and 10 people with PD. All subjects visualized dancing to a one-minute piece of music during an 8-minute fMRI scan. Subjects were asked to visualize dancing their part while listening to their specific music. For more details of the training and performances for the first of the four cohorts, see Bar & DeSouza (2016; DOI: 10.1371/journal.pone.0147731). Preliminary results revealed a significant increase in BOLD signal across sessions in a network of brain regions including bilateral auditory cortex and supplementary motor cortex (SMA) over the first three imaging sessions, but a reduction in the fourth session at 8 months. This reduction in activity was not observed in the basal ganglia. Does this learning curve, with its increase and subsequent decrease in BOLD signal, also appear when dancers are cued by auditory or by combined visual and auditory stimuli? Our results suggest that as we learn a complex motor sequence in time to music, neuronal activity increases until performance and then decreases by 34 weeks, possibly as a result of overlearning and habit formation. Our findings may also highlight the unique role of basal ganglia regions in the learning of motor sequences. We now aim to use these functional regions of activation as seed regions to explore structural (DTI) and functional connectivity analyses.

Acknowledgments: NSERC Discovery grant



P2.64 A new approach to compare the quality of allocentric and egocentric spatial navigation

Bock, O. & Fricke, M.
German Sport University Cologne


Navigation through buildings and towns can rely on an egocentric or on an allocentric representation (i.e., associating viewed landmarks with changes of direction, versus using a topographically organized “mental map”). These two representations are thought to reside in different brain areas, and to be differentially vulnerable to aging. Available tests of spatial navigation either do not discriminate between the two representations, assess only the preference for one of the two representations, or assess the quality of allocentric navigation only. No tests are available yet to assess the quality of egocentric navigation.



P2.65 The Influence of Dance for Young Adults with Disabilities

Andrew, R.-A., Reinders, N.J., & DeSouza, J.F.X.
York University


The benefits of dance have become a popular subject of research worldwide as unlikely demographics, such as people with Parkinson’s disease, are dancing to improve their health. Dance is a multisensory activity that incorporates physical exercise, creativity, and spatial-temporal skills; these components are ideal for improving neural connectivity and enhancing brain plasticity. Regular participation in dance classes has been shown to have a positive effect on participants’ mental health. Our study examined putative benefits in 8 disabled young adults (3 males and 5 females, median age 22.5 years), who participated in ten 1-hour dance classes over 5 weeks in a class developed specifically for people with disabilities. Although there are some community dance classes that are inclusive of disabled people, this class combined features of community dance with the sequential, skills-based classes available to able-bodied youth. We used interviews, videos, and questionnaires to assess participants’ mood. In addition, we used electroencephalography (EEG) to examine individual participants’ changes in resting state (rsEEG) before and after a dance class. Asymmetry in alpha band frequency (8-13 Hz) has been associated with negative affect, and increased wave function in any single frequency band can improve functional connectivity across all networks (Gordon, Palmer & Cooper, 2010; Cruz-Garza et al., 2014). We used the Emotiv EPOC wireless headset, which is a quick, non-invasive and portable method of neuroimaging. The dance classes were designed and implemented following the principles of disability rights models; medical diagnoses were not disclosed. In this study, ‘disabled’ may refer to physical, developmental, cognitive, or mental health impairments. We expect the results of this study will contribute to the existing research into the benefits of dance and provide a clearer understanding of the impact dance may have on the lives of disabled people.



P2.66 Distance perception of an object that moves with you

Kim, J. J. & Harris, H. R.
York University


The perceived distance to objects in the environment needs to be updated during self-motion. Such updating needs to be overridden if the object moves with the observer (such as when reading a phone while walking). Errors in updating could lead to errors in perceived distance and, because of size/distance invariance, to errors in perceived size. To look for such errors, we measured the perceived size of an object that moved with the observer during visually simulated self-motion.

Participants judged whether a vertical rod, presented on the ground plane in a virtual-reality-simulated scene at fixed distances of 2-10 m, appeared longer or shorter than a physical reference rod (45 cm) that they held in their hands either vertically or horizontally. Observers were either stationary or in the presence of optic flow compatible with moving at 1 m/s or 10 m/s forwards or backwards. Viewing was either monoscopic or stereoscopic. The length of the visual rod was varied by an adaptive staircase and responses were fitted with a logistic function to determine the point of subjective equality (PSE).
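For illustration, a minimal version of the logistic fit used to obtain the PSE might look as follows; the proportions below are invented for demonstration, not the study's data.

```python
# Illustrative fit of a logistic psychometric function to hypothetical
# "judged longer" proportions, recovering the PSE as described above.
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, pse, slope):
    return 1.0 / (1.0 + np.exp(-(x - pse) / slope))

rod_length_cm = np.array([35, 40, 45, 50, 55, 60, 65])     # visual rod lengths
p_judged_longer = np.array([0.05, 0.10, 0.25, 0.45, 0.70, 0.90, 0.97])

(pse, slope), _ = curve_fit(logistic, rod_length_cm, p_judged_longer,
                            p0=[45.0, 5.0])
print(f"PSE = {pse:.1f} cm")
# A PSE above the 45 cm reference means the visual rod had to be physically
# longer to appear equal in size, i.e., its perceived size was compressed.
```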

The rod generally needed to be longer than the physical rod to be judged as equal to it in size. Errors were smaller when viewing monoscopically compared to stereoscopically (+16%). The orientation of the reference rod influenced size judgements, with larger errors when the rod was held horizontally (+16%) compared to when it was held vertically (+6%). However, there were no significant differences in the errors in perceived rod size due to optic flow.

We interpret the errors in the perceived size as resulting from an error in perceived distance. Thus, we confirm the well-known observation that perceived distances are compressed in a virtual environment. However, this compression effect disappeared with monoscopic viewing. Our ability to update the distance of an object moving with us appears robust during forward and backward self-motion.



P2.67 The sound of us walking together in time and space: Exploring how temporal coupling affects represented body size, peripersonal and interpersonal space in group interactions

Fairhurst M.F.*, Tajadura-Jiménez, A.*, Keller, P.E. & Deroy, O.
* Shared 1st authorship

Universidad Carlos III de Madrid & University College London


Coordinating our actions in time and space with others has been suggested to act as a social glue, holding interacting groups together. Whether as part of a marching band or simply walking down a sidewalk, we regularly hear and integrate the sounds and proprioceptive information of our footsteps along with those sounds of others. This often leads to a sense of being part of a group and of personal enlargement. In this study, participants marched in synchrony with the sound of a metronome, while listening to footstep sounds of 8 confederates walking around them. In a 2×2 factorial design, we manipulated the footstep sounds of the group, varying temporal synchronicity (synchronous or asynchronous with the metronome) and congruency (same versus different footwear to the participant). This changed how similar in timing and quality the participant footsteps were relative to the others. We measured temporal coordination and subsequent changes in feelings about self and others, represented body size, peripersonal space and comfort interpersonal distance. Beyond merely tracking interpersonal affiliation, our results show a main effect of synchronicity on peripersonal space, with larger distances in the asynchronous conditions, while for interpersonal distance an interaction between synchronicity and congruency is observed, with the smallest distance in the same footwear, synchronous condition. Synchronicity with the group had positive effects in reports of body strength and elongation, emotional valence and dominance, and feelings of affiliation. We will discuss these results and their correlations with gait changes related to sensorimotor synchronization and represented body weight. We suggest that when part of a larger group, we feel a smaller part of the whole thus affecting our action space accordingly. Further, the more similar we sound as a group, the closer we feel to others.

Acknowledgments: AT was supported by RYC-2014–15421 and PSI2016-79004-R (“MAGIC SHOES: Changing sedentary lifestyles by altering mental body-representation using sensory feedback”; AEI/FEDER, UE), Ministerio de Economía, Industria y Competitividad of Spain. OD and MF were supported by the AHRC RTS (“Rethinking the senses”) grant AH/L007053/1.



P2.68 Is Attentional Resource Allocation Across Sensory Modalities Task-Dependent?

Wahn, B. & König, P.
University of British Columbia


Human information processing is limited by attentional resources. That is, via attentional mechanisms, humans select only a part of the sensory input for further processing at the expense of neglecting other sensory input. In multisensory research, a matter of ongoing debate is whether there are distinct pools of attentional resources for the sensory modalities or whether attentional resources are shared across the sensory modalities. Recent studies have suggested that attentional resource allocation across the sensory modalities is in part task-dependent. That is, the recruitment of attentional resources across the sensory modalities depends on whether processing involves feature-based attention (e.g., the discrimination of stimulus attributes) or spatial attention (e.g., the localization of stimuli). Here, we present a line of experiments (Wahn & König, 2015a,b; Wahn, Schwandt, Krüger, Crafa, Nunnendorf, & König, 2015; Wahn & König, 2016; Wahn, Murali, Sinnett, & König, 2017) and a review of the literature (Wahn & König, 2017) supporting this view. For the visual and auditory sensory modalities, findings suggest that distinct resources are recruited when humans perform feature-based attention tasks, whereas for the visual and tactile sensory modalities, partly shared resources are recruited. If feature-based attention tasks are time-critical, shared resources are recruited across the sensory modalities. When humans perform a feature-based attention task in combination with a spatial attention task, partly shared resources are recruited across the sensory modalities as well. Conversely, for spatial attention tasks, attentional processing consistently involves shared attentional resources across the sensory modalities. Overall, these findings suggest that the attentional system flexibly allocates attentional resources depending on task demands (i.e., whether spatial and/or feature-based attention is required) and the involved sensory modalities. We propose that such flexibility reflects a large-scale optimization strategy that minimizes the brain’s costly resource expenditures and simultaneously maximizes its capability to process currently relevant information.

Acknowledgments: We acknowledge the support of a postdoc fellowship of the German Academic Exchange Service (DAAD) awarded to BW and the support by H2020 – H2020-FETPROACT-2014 641321 – socSMCs for PK.



P2.69 Visual-Inertial interactions in the perception of translational motion

de Winkel, K.N. & Bülthoff, H.H.
Max Planck Institute for Biological Cybernetics


Recent work indicates that the central nervous system forms multisensory perceptions differently depending on inferred signal causality. In accordance with these findings, we hypothesize that multisensory perception of traveled distance in the horizontal plane conforms to such Causal Inference (CI).

Participants (n=13) were seated in the Max Planck Cablerobot Simulator, and shown a photo-realistic rendering of the simulator hall via a Head-Mounted Display. Using this setup, they were presented various unisensory and (incongruent) multisensory visual-inertial horizontal linear surge motions, differing only in amplitude (i.e., distance). Participants performed both a Magnitude Estimation and a Two-Interval Forced Choice task. We modeled the responses in the tasks according to a CI model, as well as competing models (Cue Capture, Forced Fusion), and compared the models based on their fits.
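As an illustration of the CI model being compared here, the sketch below implements a standard Körding et al. (2007)-style causal-inference estimate on a numerical grid; all parameter values are arbitrary examples rather than fitted values, and the actual model details in this study may differ.

```python
# Minimal numerical sketch of Bayesian causal inference for travelled
# distance: given noisy visual (x_v) and inertial (x_i) measurements, compute
# the posterior probability of a common cause and the model-averaged estimate.
# All sigmas, priors and p_common below are illustrative assumptions.
import numpy as np

def ci_distance_estimate(x_v, x_i, sigma_v=0.5, sigma_i=0.8,
                         prior_mean=4.0, prior_sd=2.0, p_common=0.5):
    s = np.linspace(0.01, 12.0, 2000)                    # candidate distances (m)
    prior = np.exp(-0.5 * ((s - prior_mean) / prior_sd) ** 2)
    prior /= prior.sum()
    lik_v = np.exp(-0.5 * ((x_v - s) / sigma_v) ** 2) / (sigma_v * np.sqrt(2 * np.pi))
    lik_i = np.exp(-0.5 * ((x_i - s) / sigma_i) ** 2) / (sigma_i * np.sqrt(2 * np.pi))

    # C = 1: one distance generated both measurements (fusion)
    evidence_c1 = np.sum(lik_v * lik_i * prior)
    s_fused = np.sum(s * lik_v * lik_i * prior) / evidence_c1
    # C = 2: independent distances for each modality (segregation)
    evidence_c2 = np.sum(lik_v * prior) * np.sum(lik_i * prior)
    s_v_alone = np.sum(s * lik_v * prior) / np.sum(lik_v * prior)

    post_c1 = p_common * evidence_c1 / (
        p_common * evidence_c1 + (1 - p_common) * evidence_c2)
    # Model averaging for the visually signalled distance
    return post_c1, post_c1 * s_fused + (1 - post_c1) * s_v_alone

print(ci_distance_estimate(x_v=3.0, x_i=3.4))   # near-congruent: mostly fused
print(ci_distance_estimate(x_v=3.0, x_i=7.0))   # discrepant: mostly segregated
```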

The data indicate that distance is somewhat underestimated for both the visual and inertial unisensory channels, and that differential thresholds increase with physical distance, in accordance with Weber’s law. Preliminary findings on model comparisons favor different models in different individuals, with an overall preference for the CI model. However, the data also suggest that different priors may be needed to account for differences between the tasks.



P2.70 Listening to a conversation with aggressive content expands the interpersonal space

Vagnoni, E., Lewis, J., Tajadura-Jiménez, A. & Cardini, F.
Anglia Ruskin University


The distance individuals maintain between themselves and others can be defined as ‘interpersonal space’. This distance can be modulated both by situational factors and by individual characteristics. Here we investigated the influence that the interpretation of other people’s interactions, in which one is not directly involved, may have on a person’s interpersonal space. In the current study we measured, for the first time, whether the size of interpersonal space changes after listening to other people’s conversations with neutral or aggressive content. The results showed that interpersonal space expands after listening to a conversation with aggressive content relative to a conversation with neutral content. This finding suggests that participants tend to distance themselves from an aggressive confrontation even if they are not involved in it. These results are in line with the view of interpersonal space as a safety zone surrounding one’s body.



P2.71 Social modulation of audiotactile integration near the body

Hobeika L., Taffou M. & Viaud-Delmon, I.
IRCAM, Sorbonne Université, Laboratoire STMS


Peri-personal space (PPS), the space immediately surrounding our bodies, is the region in which the multisensory integration of stimuli is boosted. As a space of interaction with the external world, PPS is involved in the control of motor action as well as in the protection of the body. Its boundaries are flexible, but little is known about their modulation by the presence of, or interaction with, other individuals. We investigated whether PPS boundaries are modulated in the presence of an inactive individual, and when participants perform a task in collaboration or in competition with a partner. We used a modified version of the Canzoneri et al. (2012) audiotactile interaction task in three groups of right-handed participants. In each group, participants performed the task both in isolation and with another participant, who was either inactive (audience group) or also doing the task, in collaboration (collaborative group) or in competition (competitive group). They had to detect as fast as possible a tactile stimulus administered to their hand, while task-irrelevant sounds, looming from the right and left front hemifields, were presented. The sound stimuli were processed through binaural rendering. Tactile stimuli were delivered when the sound was perceived at varying distances from the participant’s body. Mean reaction times to the tactile stimuli at the different perceived sound distances were compared and used to estimate PPS boundaries. PPS boundaries were modulated only when participants acted in collaboration with a partner, in the form of an extension into the right hemispace, independently of the location of the partner. This suggests that space processing is modified during tasks performed in collaboration, and questions the notion of motor space during group actions.
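For readers unfamiliar with the paradigm, PPS boundaries in such tasks are commonly estimated by fitting a sigmoid to tactile reaction times as a function of the perceived distance of the looming sound; the sketch below illustrates this with invented data and is not the authors' analysis code.

```python
# Schematic sketch of Canzoneri-style PPS estimation: mean tactile RT is
# fitted as a sigmoidal function of the perceived distance of the looming
# sound, and the boundary is taken as the central (inflection) point.
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(d, rt_min, rt_max, d_central, slope):
    return rt_min + (rt_max - rt_min) / (1 + np.exp(-(d - d_central) / slope))

distance_cm = np.array([15, 30, 45, 60, 75, 90])        # perceived sound distance
mean_rt_ms = np.array([352, 358, 371, 392, 401, 404])   # hypothetical tactile RTs

p0 = [350, 405, 55, 10]   # rough starting values for the fit
(rt_min, rt_max, d_central, slope), _ = curve_fit(sigmoid, distance_cm,
                                                  mean_rt_ms, p0=p0)
print(f"Estimated PPS boundary ~ {d_central:.0f} cm from the body")
# Faster tactile RTs when the sound is perceived close to the body mark the
# multisensory facilitation zone; the inflection point indexes its extent.
```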



P2.72 Modulation of Self-recognition by Interpersonal Synchronization

Hao, Q., Ora, H., Ogawa, K., Amano, S. & Miyake, Y.
School of Computing, Tokyo Institute of Technology


Self-recognition, including the senses of agency and ownership, remains poorly understood and has attracted much interest from psychological researchers, who often use the rubber hand illusion to investigate it. However, few studies have investigated self-recognition during interpersonal interaction, such as the face-to-face interactions of daily life. Such face-to-face interaction includes two types of movement: mirror-symmetric and non-mirror-symmetric movements. For example, previous studies investigated the synchronous movement of a human hand and a rubber hand arranged to face each other. These studies reported a weak sense of agency, but not ownership, when participants viewed the movement of a right or left rubber hand that was synchronized with their right hand’s movement. Although one previous study reported that both the sense of agency and ownership were elicited by interpersonal synchronization, it focused on the condition of mirror-symmetric movements only: the participants’ right- or left-hand fist-clenching movement was synchronously imitated by the left or right hand, respectively, of an experimenter seated face to face. Hence, the present study designed two conditions corresponding to mirror-symmetric and non-mirror-symmetric movements, thereby investigating self-recognition (sense of agency and ownership) in interpersonal synchronization. In the mirror-symmetry condition, participants moved their right or left hand and saw the synchronous movement of the face-to-face experimenter’s left or right hand, respectively, while in the non-mirror-symmetry condition the experimenter’s right or left hand was synchronized with the participants’ right or left hand, respectively. The results showed that the sense of agency and ownership were significantly elicited in both the mirror- and non-mirror-symmetry conditions, whereas proprioceptive drift, which has been reported not to go hand in hand with ownership, did not differ between conditions. This suggests that self-recognition, including the senses of agency and ownership, can be modulated by interpersonal synchronization.



P2.73 The use of egocentric and gravicentric cues to perceived vertical in the absence of tactile cues

Bury, N., Harris, L.R. & Bock, O.
Centre for Vision Research, York University, Toronto, Canada


Human spatial orientation can be anchored in three reference frames – gravicentric (the pull of gravity), allocentric (the alignment of familiar visual objects) and egocentric (the long body axis). In the absence of gravity, the vertical tends to be mostly aligned with the egocentric reference frame (Jenkin et al., 2005). However, it is still uncertain how the vertical is determined when tactile cues to the direction of gravity, normally obtained from the support surface, are absent.

Thirty-five participants were tested on the ground and underwater under neutral buoyancy. On the ground, they were strapped to a padded plate; underwater, they wore a buoyancy control device, which was attached to the plate. Participants were positioned in four body postures between 0° (upright) and -135° (pitched head-down). In all conditions, vision was constrained to a circular view in which they saw a monoscopic image of a tree. Using a joystick, they adjusted the tree in the pitch axis such that the “leaves are at the top and roots are at the bottom”. The experimenter avoided any definition of “up” or “down”.

On ground, 62.9% of participants were dominated by the egocentric reference frame, and 37.1% by the gravicentric frame; underwater, 65.7% of participants relied on the egocentric frame and 34.3% on gravicentric frame. 91.4% of participants were consistent in their preferred reference frame on ground and underwater.

We conclude that the weighting given to the various sensory cues that contribute to the perception of vertical is highly individual and that those that were dominated by gravity in the presence of tactile cues continued to be dominated by gravity in their absence. The vestibular system alone (without tactile cues) is sufficient for detecting the gravicentric reference frame.

Acknowledgments: This work was supported by the Space Administration department of the National Aeronautics and Space Research Centre of the Federal Republic of Germany (DLR) to Prof. Otmar Bock, with funds made available by the German Federal Ministry for Economic Affairs and Energy, based on a resolution of the German Federal Parliament (award code 50WB0726). NB holds a post-doc fellowship from the VISTA program at York University.



P2.74 Auditory roughness impacts the coding of peri-personal space

Taffou, M., Suied, C. & Viaud-Delmon, I.
Institut de Recherche Biomédicale des Armées


Peri-personal space (PPS) is defined as the space immediately surrounding our bodies, which is critical for our interactions with the external world. This space near the body, which is coded by a dedicated network of multisensory neurons, is thought to play a role in the protection of the body. The boundaries of PPS are known to be flexible and to be modulated by the presence of threatening elements in the environment.

Recently, it has been shown that alarming sounds such as human screams contain an acoustic attribute (amplitude modulation in the 30–150 Hz range) that corresponds to the perception of roughness. Roughness seems to be linked to a more intense induction of fear, to behavioral gains, and to higher activation in cerebral areas involved in fear and danger processing. Therefore, the presence of roughness might confer on sounds an emotional quality sufficient to impact the multisensory coding of PPS.

In the present study, we explored whether auditory-tactile integration could be modified by the auditory attribute of roughness. We used a modified version of the Canzoneri et al. (2012) paradigm to study peri-trunk PPS in healthy participants, comparing two meaningless looming sounds: a simple harmonic sound (f0 = 500 Hz) and a rough sound (the same harmonic sound amplitude-modulated at 70 Hz). The sounds were processed through binaural rendering so that the virtual sound sources loomed towards participants from the left part of their rear hemifield.
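As an illustration of the stimulus manipulation, the two sounds could be synthesized along the following lines; harmonic content, duration and modulation depth here are assumptions for demonstration, and the binaural rendering used in the study is not reproduced.

```python
# Sketch of the two stimulus types described above: a harmonic complex with
# f0 = 500 Hz, either unmodulated or amplitude-modulated at 70 Hz (roughness).
import numpy as np

fs, dur = 44100, 2.0
t = np.arange(int(fs * dur)) / fs

f0, n_harmonics = 500, 4
harmonic = sum(np.sin(2 * np.pi * f0 * k * t) / k for k in range(1, n_harmonics + 1))
harmonic /= np.max(np.abs(harmonic))                 # non-rough sound

am = 1.0 + np.sin(2 * np.pi * 70 * t)                # 70 Hz amplitude modulation
rough = harmonic * am / np.max(np.abs(harmonic * am))

# A crude looming cue can be approximated by a rising amplitude ramp
# (the study's binaural spatial rendering is omitted here).
ramp = np.linspace(0.1, 1.0, t.size)
rough_looming = rough * ramp
print(rough_looming.shape, rough_looming.dtype)
```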

We found that participants’ PPS was larger in the presence of the rough sound than of the non-rough sound. These results suggest that PPS is sensitive to roughness, even expressed in a very simple way (simple harmonic sounds and not human screams or natural sounds), confirming that roughness could be an auditory attribute efficiently conveying a signal of danger to the central nervous system.



P2.75 Neural signatures of processing noun phrases and contextual plausibility

Xia, A., Barbu, R., Singh, R., Toivonen, I. & Van Benthem, K.
Institute of Cognitive Science, Carleton University


Sentence comprehension involves introducing, storing, and retrieving discourse information. Indefinite noun phrases serve to introduce new discourse referents, whereas definite noun phrases are often anaphoric, triggering an internal mechanism of searching for old referents that are presumably part of the common ground (e.g., Heim, 1982). However, a definite may be used to introduce a new referent by appealing to presupposition accommodation: the process of amending the context to entail a required presupposition (Lewis, 1979; von Fintel, 2008). Thus, new information may be conveyed through assertions or presuppositions, raising the challenge of distinguishing the two as different kinds of discourse update (e.g., Gazdar, 1979).

Our study extends results from an incremental stops-making-sense task (Singh et al., 2016) by using electroencephalogram (EEG) to investigate brain activity during the processing of assertions and presuppositions in both plausible and implausible contexts. We expect part of our findings to corroborate EEG results from a similar German-language study by Burkhardt (2006), in which an N400 was elicited in implausible contexts. However, Burkhardt (2006) did not include indefinite phrases. By controlling for the addition of a discourse referent, any differences in neural signatures can be more precisely attributed to differences between definites and indefinites, and the corresponding processes involved in their interpretation.

We use abridged versions of all 128 pairs of sentences from Singh et al. (2016) as our stimuli. Preliminary data analysis from 20 participants shows a main effect of definiteness, with a negative deflection in the 350-400 ms time window. An effect of plausibility was also found, with a negative deflection around 400 ms and a positive deflection around 600 ms in left central and posterior regions, reminiscent of the N400/P600 complex. However, the effect of definiteness appeared stronger, suggesting the employment of presupposition accommodation.



P2.76 Factors influencing the uptake of co-speech gestures in real-time language processing

Saryazdi, R. & Chambers, C. G.
University of Toronto


Spoken language processing is known to be influenced by concurrent visual information. For example, the hand gestures produced spontaneously by talkers have been shown to facilitate listeners’ auditory comprehension. The present study examines how listeners’ basic ability to attend to and use these manual gesture cues is influenced by perceptual factors, such as the nature of visual cues (small vs. large gesture) and the quality of auditory input (presence vs. absence of background noise). In two experiments, listeners watched recordings of a speaker providing instructions regarding various objects in the visual environment (“Pick up the candy”). Critical trials varied whether the speaker produced a simultaneous gesture reflecting the grasp posture used to pick up the target object. Listeners’ gaze position was recorded to capture the relative ease with which the target was identified as language unfolded in real time. In Experiment 1, although listeners rarely fixated directly on the co-speech gestures, peripheral uptake of visual information speeded target identification as the sounds in the unfolding noun were heard, compared to when speech occurred without gestures. However, the benefit was mostly observed when target items were small objects. This may be because the correspondingly smaller and hence more precise-looking gestures can more effectively differentiate targets from other objects than when the target is a large object among smaller ones. In Experiment 2, background noise was added to examine whether degrading the quality of speech would increase listeners’ reliance on gesture cues in a compensatory manner. Interestingly, background noise reduced listeners’ use of gesture information, possibly because the increased demands on auditory processing limited the resources available for attending to or integrating visual information. Together, the results expand our understanding of how situational factors influence the degree to which visual and auditory signals are coordinated in the course of natural communicative interactions.



P2.77 Indexing Multisensory Integration of Natural Speech using Canonical Correlation Analysis

O’Sullivan, A.E., Crosse, M.J., Di Liberto, G.M. & Lalor, E.C.
Trinity College Dublin and University of Rochester, NY


Speech is a central part of our lives, and is most commonly perceived as multisensory. Indeed, integrating the auditory and visual information from a talker’s face is known to benefit speech comprehension, particularly when the auditory information is degraded. However, the neural mechanisms underlying this integration are still not well understood, especially in the context of natural, continuous speech.

Recent work employing EEG to study the encoding of natural speech has indexed the benefit of congruent visual speech on the entrainment of the acoustic envelope (Crosse et al., 2015). Furthermore, they found this effect to be enhanced in challenging listening conditions, in line with the principle of inverse effectiveness (Crosse et al., 2016). This approach, however, is limited in its ability to deal with more complex representations of the speech signal. This is unfortunate given recent work demonstrating the ability to index auditory speech encoding at different hierarchical levels using EEG (Di Liberto et al., 2015). Exploiting such representations of the speech signal in a multisensory scenario could provide a more fine-grained interpretation of integration effects along the cortical hierarchy, and inform how these effects may flexibly change depending on the quality of acoustic information. Thus, our goal is to relate the multivariate EEG to a multivariate representation of the speech.

Canonical correlation analysis (CCA) is a suitable technique for this since it allows us to examine the relationship between two sets of multivariate data. Specifically, CCA finds a transform of the stimulus and response which optimizes the correlation between the two. Here, we use this approach to examine integration effects at different hierarchical levels using a spectrotemporal and phonetic feature representation of the speech. The overarching aim of the work is to develop a framework for testing hypotheses about how the temporal dynamics and articulatory information from a speaker’s face help us to understand speech in challenging listening conditions.
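A minimal sketch of the CCA approach, using scikit-learn and random placeholder data in place of the actual EEG and speech representations (and omitting the stimulus time-lagging such analyses typically require), is given below.

```python
# Conceptual sketch of relating a multivariate speech representation
# (e.g., spectrogram bands or phonetic features) to multichannel EEG via CCA;
# data here are random placeholders, not the study's recordings.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
n_samples, n_speech_feats, n_eeg_channels = 5000, 16, 128
speech = rng.normal(size=(n_samples, n_speech_feats))
eeg = rng.normal(size=(n_samples, n_eeg_channels))
eeg[:, :3] += speech[:, :3] * 0.5          # inject a weak shared component

cca = CCA(n_components=3)
speech_c, eeg_c = cca.fit_transform(speech, eeg)

# Canonical correlations: how strongly each projected stimulus component
# covaries with its matched EEG component (in this framework, larger values
# for AV than A-alone speech would index multisensory integration).
corrs = [np.corrcoef(speech_c[:, k], eeg_c[:, k])[0, 1] for k in range(3)]
print(np.round(corrs, 2))
```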



P2.78 Language, but not race, induces vocal superiority in audiovisual emotion perception

Kawahara, M., Yamamoto, H.W. & Tanaka, A.
Tokyo Woman’s Christian University


For successful social interaction, humans need to perceive others’ emotions from their face and voice appropriately. Recent studies have demonstrated that such audiovisual emotion perception is modulated by cultural background. Tanaka et al. (2010) showed that Japanese adults focus on vocal expression more than Dutch adults. In our earlier experiments, we demonstrated that such cultural differences appear during childhood: both Japanese and Dutch preschoolers judge speakers’ emotion based mainly on facial expression, whereas Japanese, but not Dutch, children come to focus on vocal expression with age for in-group speakers. What, then, is the cue that leads Japanese children to focus on in-group members’ vocal expression? One candidate cue is the speakers’ appearance (race), and the other is their language. To determine which possibility holds true, in the present study we conducted experiments with Japanese children (5-12 years old) and adults. We showed them video clips in which a Japanese or Dutch speaker expressed her emotion in the face and voice and asked them to judge whether she was happy or angry. In half of the video clips, the speaker’s appearance was congruent with her language (e.g., Japanese appearance – Japanese language: a speaker who appears Japanese and speaks Japanese), and in the other half they were incongruent (e.g., Japanese appearance – Dutch language: a speaker who appears Japanese but speaks Dutch). The results showed that Japanese 5- to 6-year-olds focused on the facial expressions of speakers regardless of the speakers’ appearance and language, and that Japanese children gradually shifted their attention from facial to vocal expressions with age during childhood only when they observed Japanese appearance – Japanese language speakers or Dutch appearance – Japanese language speakers. This suggests that the speakers’ language is the cue that leads Japanese children to focus on vocal expression in emotion perception.



P2.79 The visual speech advantage in noise: Effects of listener age, seeing two talkers and spatial cuing

Beadle, J., Davis, C. & Kim, J
The MARCS Institute, Western Sydney University


In noisy conditions, seeing a talker’s face facilitates speech recognition (the visual speech advantage). Movement of an individual’s face and mouth supplements congruent auditory information, allowing better speech perception. However, what happens when two talkers (one relevant and one irrelevant) are presented? Would seeing another talker reduce the visual advantage for younger and older listeners? And would cuing the relevant talker overcome this? To investigate these questions, we recruited 24 younger adults (10 females, MAge = 24) and 24 older adults (12 females, MAge = 71) for a speech recognition experiment. Spoken sentences were mixed with speech-shaped noise at -1 and -4 dB SNR and randomly presented in four visual display conditions: Baseline (a still face image); Standard visual speech (a video of a single relevant talker); Valid cue (videos of relevant and irrelevant talkers side by side); and Ambiguous cue (the same two videos). The cue consisted of a white rectangle that appeared before the sentence and remained visible until the trial finished. The valid cue surrounded only the relevant talker; the ambiguous cue surrounded both videos. Participants were instructed to attend to the space inside the rectangle and type what they heard. Overall, recognition rates were highest for the Standard condition, and were poorer than Standard but better than Baseline when two talkers were presented. Younger adults performed better than older adults. Younger adults benefited from the Valid cue; older adults did not. The results suggest that focusing on a relevant talker is necessary for auditory and visual speech signals to integrate and provide a visual speech advantage. Further, older adults seem to be more susceptible to distraction from an irrelevant talker, even when a salient cue directing visual-spatial attention towards the relevant talker is presented. The role of attention in auditory-visual speech perception across the lifespan will be discussed.



P2.80 Audiovisual Integration of Subphonemic Frequency Cues in Speech Perception

Plass, J., Brang, D., Suzuki, S. & Grabowecky, M.
Department of Psychology, University of Michigan


Visual speech can facilitate auditory speech perception, but it is unclear what visual cues contribute to these effects and what crossmodal information they provide. Because visible articulators (e.g., mouth and lips) shape the spectral content of auditory speech, we hypothesized that listeners may be able to infer spectrotemporal information from visual speech. To uncover statistical regularities that would allow for such crossmodal facilitation, we compared the resonant frequencies produced by the front cavity of the mouth to the visible shape of the oral aperture during speech. We found that the time-frequency dynamics of this oral resonance could be recovered with unexpectedly high precision from the changing shape of the mouth. Because both frequency modulations and visual shape properties are neurally encoded as mid-level perceptual features, we hypothesized that perceptual learning of this correspondence could allow for spectrotemporal auditory information to be recovered from visual speech without reference to higher-order speech-specific (e.g., phonemic or gestural) representations. Isolating these features from other speech cues, we found that speech-based shape deformations enhanced sensitivity for naturally co-occurring frequency modulations, although neither was explicitly perceived as speech and was, therefore, unlikely to be represented with a speech-specific code. To test whether this type of correspondence could be used to enhance speech intelligibility, we degraded the spectral content of spoken sentences so that they were nearly unintelligible, but their amplitude envelope was preserved. Visual speech produced superadditive improvements in word identification, suggesting that obscured spectral content could be recovered from visual speech. This enhancement exceeded the enhancement observed when the amplitude envelope was degraded and spectral content preserved, suggesting that visual speech provided spectral information independently of any higher-order speech information that would have affected both conditions equally. Together, these results suggest that the perceptual system exploits statistical relationships between mid-level audiovisual representations of speech to facilitate perception.



P2.81 Using infant-directed speech to convey meaning: prosodic correlates to visual properties of objects

Walker, P. & Bremner, G.
Lancaster University


Despite traditional assumptions that prosody contributes only to the structural organisation of spoken language, increasing evidence suggests that it plays a fundamental role in the communication and interpretation of ambiguous word meaning. Specifically, recent research suggests that speakers manipulate prosody in a way that reflects known crossmodal correspondence between visual and auditory sensory channels, such as the relationship between auditory pitch and visual brightness (i.e. higher-pitched sounds are associated with brighter objects than their lower-pitched counterparts). Given the prosodically rich and variable nature of infant-directed speech (IDS) in the company of novel language users, we predict that users of IDS manipulate prosody in an attempt to convey semantic information paralinguistically. To further establish how prosody is used in this enterprise, we explored the extent to which infant-directed speakers talk about novel objects that differ in one of five visual dimensions: size, angularity, brightness, height and thinness (all of which have been found to elicit visual-auditory crossmodal correspondences). In this experiment, adult users of IDS verbalised simple sentences containing a novel word (e.g. “Look at the timu one.”) in the presence or absence of meaning. The findings throw light on the functional significance of crossmodal correspondences in IDS.



P2.82 Is integration of audiovisual speech fixed or flexible?

Tiippana, K., Kurki, I. & Peromaa, T.
University of Helsinki


Both auditory (A) and visual (V) articulation cues can be used in speech perception. Often, one sensory modality has a higher signal-to-noise ratio, providing more informative cues. Statistically optimal integration would weight the cues according to how much information each modality provides, thus requiring an accurate estimate of the informativeness of the cues in the stimulus. Here, we studied the optimality of audiovisual (AV) speech integration using a subthreshold summation paradigm. Moreover, we investigated the effect of a priori knowledge on integration: is optimal weighting possible only when the observer knows the relative informativeness of the A and V cues?

Thresholds for discriminating the syllables [ka] and [pa] were measured with the method of constant stimuli for auditory, visual and audiovisual stimuli presented in white noise, by finding the auditory intensity and visual contrast level corresponding to 74% correct responses. Thresholds were measured in five stimulus conditions: unisensory A and V, and three different AV ratios. The five stimulus conditions were presented either randomly interleaved, or in blocks so that the observer knew which condition was measured. Integration was assessed by comparing the intensity of the A and V components in an AV stimulus at threshold to the unisensory thresholds. An optimal model predicts quadratic summation, i.e. AV thresholds proportional to the square root of the sum of the squared A and V intensities at threshold. A suboptimal model with fixed weights predicts higher unisensory thresholds and, paradoxically, linear summation.
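For concreteness, one common way to write out the two summation rules numerically is shown below, with each AV component expressed as a fraction of its own unisensory threshold (an equal A:V ratio is assumed here for illustration).

```python
# Numerical illustration of the two summation rules described above.
# Express each component of the AV stimulus as a fraction c of its own
# unisensory threshold (equal A:V ratio). Quadratic summation reaches
# threshold when sqrt(c**2 + c**2) = 1; linear summation when c + c = 1.
import numpy as np

c_quadratic = 1 / np.sqrt(2)   # ~0.71 of the unisensory threshold per component
c_linear = 1 / 2               #  0.50 of the unisensory threshold per component

print(f"component level at AV threshold: quadratic {c_quadratic:.2f}, "
      f"linear {c_linear:.2f}")
# Quadratic summation is the signature of optimal cue weighting in this
# paradigm; the fixed-weight model's apparent linear summation arises
# together with its elevated unisensory thresholds.
```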

We found roughly quadratic summation of AV thresholds in both interleaved and blocked experiments. This suggests that cue weighting in AV integration is optimal and does not require a priori knowledge. However, the thresholds in the interleaved experiment were overall about 10% higher compared to the blocked experiment. This implies that estimating the cue information value without a priori knowledge imposes a general processing cost.



P2.83 Perceiving your own shadowers’ speech

Chee, S., Dorsi, J., Rosenblum, L.D. & Dias, J.W.
University of California – Riverside


Listeners often imitate subtle aspects of a talker’s speech when producing a spoken response (e.g., Goldinger, 1998). This phonetic convergence has been observed not only with audio speech, but also with visual speech. Shadowed lipread speech sounds more similar to a shadowed talker’s speech than does unshadowed (read) speech (Miller, Sanchez, and Rosenblum, 2010). Phonetic convergence is also perceived across modalities in that raters judge audio speech based on shadowed responses of lipread speech from a talker as more similar to that talker’s lipread speech. Both audio and visual phonetic convergence may serve to enhance mutual comprehension during conversation (e.g., Pardo, 2006). In fact, it may be that converging toward a specific talker’s speech makes understanding easier for that talker. To test this possibility, experiments were conducted to examine whether talkers hear their shadowers better than individuals who shadowed different talkers. An initial study explores whether a talker would better understand shadowers who were asked to shadow the audio speech produced by that talker. Ten perceivers were recorded uttering 320 words. Groups of four shadowers were asked to listen to the perceiver’s words and say each ‘quickly and clearly’ as they were being recorded. After a few months, the original perceivers returned and listened to words recorded by their own shadowers, as well as shadowers who listened to another perceiver’s words. Perceivers heard the words in noise and were asked to identify and respond with what the word was. Initial results indicate that perceivers more easily understood words produced by shadowers who shadowed them, suggesting that phonetic convergence may serve comprehension. Follow up experiments will test whether this same advantage holds for audio words based on shadowing responses of lipread speech, and whether a talker can more easily lipread words if those words were shadowed from that talker.



P2.84 Examining Modality Differences in Timing to test the Pacemaker Explanation

Williams, E.A., Yüksel E.M., Stewart, A.J. & Jones, L.A.
University of Manchester


We investigated the classic effect that “sounds are judged longer than lights” when the two modalities are of equal duration (Goldstone et al., 1959). Recently, durations of vibrations have been found to be judged somewhere between the two (Jones et al., 2009). This pattern has also been found for temporal difference thresholds, where sensitivity is highest for sounds, followed by vibrations, and lowest for lights (Jones et al., 2009). Scalar Expectancy Theory explains these findings as the result of a central pacemaker that pulses fastest for sounds, at an intermediate rate for vibrations, and slowest for lights (Wearden et al., 1998; Jones et al., 2009). The current work aimed to test this assertion by replicating the estimation and threshold tasks of Jones et al. (2009) and correlating performance across tasks for each modality.
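A toy pacemaker-accumulator simulation of this account is sketched below; the pacemaker rates are arbitrary illustrative values, not estimates from the data.

```python
# Toy pacemaker-accumulator sketch of the Scalar Expectancy Theory account:
# the same physical duration accumulates more pulses when the pacemaker runs
# faster, so it is judged longer. Rates are illustrative, not fitted values.
import numpy as np

rng = np.random.default_rng(0)

def accumulated_pulses(duration_s, rate_hz, n_trials=10_000):
    """Mean pulse count for a duration, with Poisson pacemaker variability."""
    return rng.poisson(rate_hz * duration_s, n_trials).mean()

duration = 0.6  # seconds
for modality, rate in [("auditory", 200), ("vibrotactile", 185), ("visual", 170)]:
    print(f"{modality:12s} pacemaker {rate} Hz -> "
          f"{accumulated_pulses(duration, rate):.0f} pulses")
# More pulses for sounds than lights at an identical duration reproduces the
# "sounds are judged longer than lights" effect; vibrations fall in between.
```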

We used this same approach to investigate the filled-duration illusion: continuous sensory stimuli are judged to be longer than ‘unfilled’ stimuli, which are delineated by short beeps or flashes. Again, this difference presents itself in both estimation and threshold tasks (Wearden et al., 2007; Rammsayer, 2010), and it has similarly been argued that the pacemaker pulses at a faster rate for filled than unfilled stimuli (Wearden et al., 2007). To round off these experiments, we performed computational modelling to investigate an alternative explanation to pacemaker rate.

Our results can be summarised as three key findings. First, the classic patterns of pacemaker rates are not as pervasive as originally thought; up to 27% of participants exhibited alternative patterns. Second, if pacemaker rate is a driving factor for estimates and thresholds, its effect appears to be greater for unfilled than for filled intervals. Finally, differences in internal variability could be a contributing factor to estimation-slope effects between stimulus modalities and stimulus types, but it cannot solely explain these differences.



P2.85 Synesthesia: Seeing the world differently, a phenomenological report

Steen, C. J.
Touro College and University System


On July 16, 1915, the American painter Charles Burchfield told another of his secret perceptions to his journal. He wrote: “It seems at times I should be a composer of sounds, not only of rhythms and colors. Walking under the trees, I felt as if the color made sound.” There are stories about some synesthetes who were shunned when they blurted out their secret perceptions – as it is said that van Gogh was by his piano teacher. Others were diagnosed as being almost mad. A few lucky ones, like Kandinsky, knew that they were not alone and that their joined perceptions had a name so they were free to explore, and create, from their experiences. Today we know that synesthetic perceptions are real and experienced by about 1 person in 23. But what do synesthetes actually experience?

I am a synesthete with five different forms of synesthesia. I have the common forms of colored graphemes and moving, colored sounds, and the rarer forms of colored smells and moving, shaped colors from touch or pain. I use my experiences to create my art and to diagnose my health. My work is in several museums and the Library of Congress.

In my paper I will discuss some of the 60 known common and rare forms of synesthesia, show what synesthetes see, and mention some ways in which synesthesia can be used to navigate the world besides providing artistic inspiration. I will show examples of my synesthetic perceptions by means of a few very short videos that animate the linear Klüver form-constant diagrams to show what synesthetes really see. I believe that scientists can learn a great deal from the phenomenology of synesthetic perceptions and the different ways synesthetes use their abilities.

Hide abstract

 


P2.86 Perceived depth reversal in a motion parallax display when observers fixated on different depth planes

Sakurai, K., Neysiani, N.Z., Beaudot, W. & Ono, H.
Tohoku Gakuin University

Show abstract

When the intersection of the visual axes is not on the screen of an observer-produced motion parallax display, a head movement adds common motion to the stimulus elements on the retinae. We previously reported that the perceived depth order was affected when this common motion was artificially added to a motion parallax display (Sakurai, Furukawa, Beaudot, & Ono, 2017). Here we investigated whether the common motion caused by a real depth difference of fixation affects the perceived depth order. Random-dot patterns subtending 11 deg (height) x 12 deg (width) were presented on the screen of the motion parallax display, which simulated 2 cycles of sinusoidally corrugated surfaces with peak-to-trough depths of 1.75, 3.5, or 7 cm. Through Polaroid filters, 5 observers monocularly viewed the random-dot patterns at a viewing distance of 114 cm and binocularly fixated a red LED at viewing distances of 57, 80.6, 114, 161.2, or 228 cm while moving their heads back and forth laterally over 12 cm. The task was to report whether the region immediately below the fixation point appeared convex or concave. In the conditions with 3.5 and 7 cm depth at viewing distances of 114, 161.2, and 228 cm, the apparent depth order consistent with that defined by motion parallax alone was reported on more than 90% of trials. In the same depth conditions at viewing distances of 57 and 80.6 cm, however, it dropped to 70% of trials. This was not the case in the 1.75 cm depth condition. These results suggest a conflict between the depth cues of motion parallax and retinal velocity at the 57 and 80.6 cm fixation distances.
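
The cue conflict described above can be made concrete with the standard small-angle approximation for relative motion parallax: for lateral head velocity v, a point at distance D moves relative to a fixation point at distance F at roughly v(1/D - 1/F) rad/s. The sketch below applies this formula to the viewing distances used here; the head speed is an assumed value for illustration, and this is not the authors' stimulus-generation code.

```python
# Illustrative geometry only: the small-angle approximation for the common
# retinal motion added when fixation is nearer or farther than the 114 cm
# screen. The assumed head speed is arbitrary; this is not the authors'
# stimulus-generation code.
import numpy as np

SCREEN_CM = 114.0
FIXATION_CM = [57.0, 80.6, 114.0, 161.2, 228.0]
HEAD_SPEED_CM_S = 12.0   # assumed lateral head speed, for illustration

for fix in FIXATION_CM:
    # Relative angular velocity of a screen point with respect to the
    # fixated LED (rad/s), for lateral head velocity v: v * (1/D - 1/F).
    omega = HEAD_SPEED_CM_S * (1.0 / SCREEN_CM - 1.0 / fix)
    print(f"fixation at {fix:6.1f} cm -> common motion {np.degrees(omega):+6.2f} deg/s")
```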

Acknowledgments: Supported by JSPS Grant-in-Aid for Scientific Research (B) Grant Number 25285202 and (C) Grant Number 17K04498, and by Cooperative Research Project of the Research Institute of Electrical Communication, Tohoku University.

Hide abstract

 


P2.87 Central fatiguing mechanisms are responsible for decreases in hand proprioceptive acuity following shoulder muscle fatigue

Sadler, C.M. & Cressman, E.K.
University of Ottawa

Show abstract

Muscle fatigue is a complex phenomenon that consists of central and peripheral mechanisms which contribute to local and systemic changes in muscle performance. These effects seem to alter processing of afferent feedback from local and non-local muscles, yet it is currently unclear how proximal muscle fatigue affects proprioceptive acuity of the distal limb. The purpose of the present study was to assess the effects of shoulder muscle fatigue on participants’ ability to judge the location of their hand using only proprioceptive cues. Participants’ (N = 16) limbs were moved outwards by a robot manipulandum and they were instructed to estimate the position of their hand relative to one of four visual targets (two near, two far). This estimation task was completed before and after a repetitive pointing task was performed to fatigue the shoulder muscles. To assess central versus peripheral effects of fatigue on the distal limb, the right shoulder was fatigued and proprioceptive acuity of the left and right hands was tested. Results showed a significant decrease in proprioceptive acuity of both hands after the right shoulder was fatigued, with no change in the variability of proprioceptive estimates. A control experiment (N = 8), in which participants completed the proprioceptive estimation task before and after a period of quiet sitting, ruled out the possibility that the observed bilateral changes in proprioceptive acuity were due to a practice effect. Together, these results indicate that proximal muscle fatigue decreases proprioceptive acuity in both hands, suggesting that central fatigue mechanisms are responsible for changes in afferent feedback processing of the distal limbs.

Acknowledgments: This research was supported by a NSERC Discovery Grant to EKC

Hide abstract

 


P2.88 Co-designing Serious Games in the Surgical Environment to Address Multisensory Communication Styles and Team Experiences

Jordan, C.
OCAD University

Show abstract

Preventable medical errors in the operating room are the eighth leading cause of death in North America. While process improvements have been made, the larger system of communication and information exchange amongst surgical team members remains poorly designed. Communication within the operating room must be delivered clearly and efficiently in order to prevent medical errors, mortality, or future health complications for the patient. Current forms of communication are generally invisible and ambiguous during high-stress situations or medical emergencies, so the system offers a false sense of safety and allows similar mistakes to recur. In order to decrease medical errors and improve patient safety, the complexities of verbal, visual, tactile, sonic, and spatial understanding amongst team members must be considered. Because technical skills are prioritized within the surgical environment, communication is treated as a non-technical skill with minimal training provided. A gap remains in teams’ ability to learn and safely apply communication skills within the surgical environment without compromising patient safety. To create a more engaging learning environment, serious games can aid in the development and understanding of multisensory communication styles. Serious games provide a safe and reliable environment where teams can practice real-life scenarios through role play, time constraints, humour, and competition. If mismatches in communication techniques and human behaviour amongst team members can be identified, and the team’s situation awareness can adapt to the requirements of the procedure, surgical safety can be improved.

Hide abstract

 


P2.89 Do movement sequences and consequences facilitate dual adaptation of opposing visuomotor rotations?

Ayala, M.N. & Henriques, D.Y.P.
York University

Show abstract

Investigating the factors that affect the ability to learn or regain motor skills within a limited time frame is of great interest to both sports and rehabilitation domains. When planning movements, the human central nervous system can actively compensate for and adapt to two or more distinct perturbations simultaneously (“dual adaptation”), but only when each visuomotor map is associated with a sufficient contextual cue. It has recently been shown that cueing the motor system by including a lead-in or follow-through movement (or even a sequence including both) adjacent to the perturbed movement can facilitate learning of opposing force-field perturbations; the additional movement segment predicts the appropriate visuomotor map for the task. Here, we investigate whether that additional movement sequence requires an active motor component or whether a visual consequence is sufficient to facilitate dual adaptation of opposing visuomotor rotations. In the sequence experiment, participants experienced opposing rotations within the same experimental block, each associated with a distinct movement sequence (i.e., the arm moves to the left or right). To test whether a passive visual consequence was sufficient, in the follow-up experiment each rotation was associated with a target consequence (i.e., the target moves to the left or right). To compare the extent of dual learning, two further groups each learned a single rotation with the same sequence-rotation association. Together, these findings show whether active movement sequences and passive consequences are incorporated into motor planning and execution, and can thus facilitate dual learning of opposing visuomotor rotations.

Hide abstract

 


P2.90 Crossmodal correspondences are spontaneously used to communicate in a coordination task

Vesper, C., Schmitz, L. & Knoblich, G.
Central European University (Budapest); Ludwig-Maximilians-Universität (München)

Show abstract

Prior research on crossmodal correspondences has shown that magnitude-related stimulus features from different sensory modalities are consistently associated and facilitate individual perceptual performance [1-4]. Here, we tested whether people rely on these crossmodal associations to communicate non-verbally in social interactions.

In the context of social interactions, actors have been found to send non-verbal signals to support coordination: They systematically modulate visually perceivable parameters of their goal-directed movements (e.g., amplitude) in a way that allows co-actors to more easily predict the goal location of these movements [5-7]. Thus, actors send visual signals referring to visual locations. In the present study [8], we investigated whether non-verbal signals are also used to communicate across different sensory modalities.

To this end, two participants – seated at opposite sides of a table and unable to observe each other – were instructed to perform reaching movements to one out of three target locations aligned horizontally on the table in front of each participant. Their coordination goal was to move to corresponding target locations. Only one participant (‘Leader’) received information about the correct target location whereas the other participant (‘Follower’) did not receive any information. The Leader moved first, after a tone had marked the beginning of the trial. A second tone was triggered when the Leader arrived at the target. Subsequently, the Follower attempted to move to the corresponding target.

The results showed that Leaders systematically adjusted the duration of their actions, thereby modulating the pause between the start tone and the target arrival tone (Exp. 1) or the duration of the target arrival tone (Exp. 2), to communicate the correct target location to uninformed Followers. Specifically, Leaders used longer auditory durations to signal farther visual locations.

These findings demonstrate that people spontaneously use magnitude-related crossmodal associations to communicate non-verbally during interpersonal action coordination [9].

References

[1] Smith, L. B., & Sera, M. D. (1992). A developmental analysis of the polar structure of dimensions. Cognitive Psychology, 24(1), 99-142.

[2] Parise, C., & Spence, C. (2013). Audiovisual cross-modal correspondences in the general population. The Oxford handbook of synaesthesia, 790-815.

[3] Spence, C. (2011). Crossmodal correspondences: A tutorial review. Attention, Perception, & Psychophysics, 73(4), 971-995.

[4] Spence, C., & Deroy, O. (2013). How automatic are crossmodal correspondences? Consciousness and Cognition, 22(1), 245-260.

[5] Pezzulo, G., Donnarumma, F., & Dindo, H. (2013). Human sensorimotor communication: A theory of signaling in online social interactions. PloS ONE, 8(11), e79876.

[6] Sacheli, L. M., Tidoni, E., Pavone, E. F., Aglioti, S. M., & Candidi, M. (2013). Kinematics fingerprints of leader and follower role-taking during cooperative joint actions. Experimental Brain Research, 226(4), 473-486.

[7] Vesper, C., & Richardson, M. J. (2014). Strategic communication and behavioral coupling in asymmetric joint action. Experimental Brain Research, 232(9), 2945-2956.

[8] Vesper, C., Schmitz, L., & Knoblich, G. (2017). Modulating action duration to establish nonconventional communication. Journal of Experimental Psychology: General, 146(12), 1722.

[9] Knoeferle, K. M., Woods, A., Käppler, F., & Spence, C. (2015). That sounds sweet: Using cross-modal correspondences to communicate gustatory attributes. Psychology & Marketing, 32(1), 107-120.

Hide abstract

 


P2.91 Explicit contributions to visuomotor adaptation transfer between limbs regardless of instructions

Bouchard, J.M. & Cressman, E.K.
University of Ottawa – School of Human Kinetics

Show abstract

Both implicit and explicit processes contribute to visuomotor adaptation (i.e., adapting one’s movements in response to altered visual feedback of the hand’s position). Moreover, visuomotor adaptation in the trained hand transfers to the untrained hand. In the current study, we asked whether both implicit and explicit processes transfer between limbs and whether transfer depends on being given explicit instructions on how to counteract the visuomotor distortion. To probe the permanency of implicit and explicit contributions to visuomotor adaptation, we tested for retention on a second testing day (Day 2). Twenty-eight right-handed participants were divided into 2 groups (Strategy and Non-Strategy). All participants reached to three visual targets with a cursor on a screen that was rotated 40° clockwise relative to their hand motion. Participants in the Strategy group were instructed on how to counteract the visuomotor distortion. Following rotated reach training, participants completed two types of reaches without visual feedback. Specifically, participants (1) aimed so that their hand landed on the target (to assess implicit contributions) and (2) used what was learned during reach training so that the cursor landed on the target (to assess explicit contributions). These no-cursor trials were performed with the left (trained) and right (untrained) hands immediately after reach training and again 24 hours later. As expected, the Strategy group displayed greater explicit contributions and smaller implicit contributions than the Non-Strategy group in the trained hand after training. In addition, explicit contributions transferred to the untrained hand for both groups and were retained on Day 2, in contrast to implicit contributions, which neither transferred between limbs nor were retained for either group. Together, these results suggest that the intermanual transfer of visuomotor adaptation may be driven by explicit processes that are relatively stable over time.
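
The two no-cursor reach types described above support a common decomposition of adaptation into implicit and explicit components. The sketch below illustrates one such calculation with hypothetical reach angles; it is not necessarily the authors' exact analysis.

```python
# Sketch of a common implicit/explicit decomposition based on the two
# no-cursor reach types described above. The reach angles are hypothetical
# example data (deg relative to the target); this is not necessarily the
# authors' exact calculation.
import numpy as np

ROTATION_DEG = 40.0   # full compensation for the 40 deg CW rotation

# Hypothetical no-cursor reach angles for one participant (deg).
aim_to_target = np.array([12.0, 10.5, 13.2])   # "hand lands on target" reaches
use_strategy = np.array([33.0, 35.4, 31.8])    # "cursor lands on target" reaches

implicit = aim_to_target.mean()     # adaptation expressed without intent
total = use_strategy.mean()         # implicit + explicit compensation
explicit = total - implicit         # strategy-based (explicit) component

print(f"implicit: {implicit:.1f} deg ({implicit / ROTATION_DEG:.0%} of the rotation)")
print(f"explicit: {explicit:.1f} deg ({explicit / ROTATION_DEG:.0%} of the rotation)")
```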

Acknowledgments: This research was supported by a NSERC Discovery Grant to EKC

Hide abstract

 


P2.92 Contributions of online and offline processes to reaching in typical versus novel environments

Wijeyaratnam, D.O., Chua, R. & Cressman, E.K.
University of Ottawa

Show abstract

Human movements are remarkably adaptive, such that we are capable of completing movements in a novel environment with similar accuracy to those performed in a typical environment. In the current study we examined whether the control processes underlying movements in typical and novel visuomotor conditions are comparable. Sixteen participants were divided into 2 groups, one receiving continuous visual feedback during all reaches (CF) and the other receiving terminal feedback regarding movement endpoint (TF). Participants trained in a virtual environment by performing 150 reaches to 3 targets when (1) a cursor accurately represented their hand motion (i.e., typical environment) and (2) a cursor was rotated 45 degrees clockwise relative to their hand motion (i.e., novel environment). Analyses of endpoint-based measures over time revealed that participants achieved similar levels of performance (i.e., movement time and angular errors) by the end of reach training, regardless of visual feedback or reaching environment. Furthermore, a reduction in variability across several measures (i.e., reaction time, movement time, time to peak velocity, time after peak velocity, and jerk score) over time showed that participants improved the consistency of their movements in both reaching environments. However, participants took more time and were less consistent in initiating their movements when reaching in the novel environment, even at the end of the training trials. Angular error variability was also consistently greater in the novel environment across trials and within a trial (i.e., when comparing error variability at different proportions of the movement trajectory: 25%, 50%, 75%, and 100%). Together, the results suggest a greater contribution of offline control processes and less effective online corrective processes when reaching in a novel environment than in a typical environment.
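
As an illustration of two of the kinematic measures listed above, the sketch below computes an integrated squared-jerk score and angular errors at 25%, 50%, 75%, and 100% of a 2-D hand path. The sampling rate, path, and formula variants are assumptions for illustration only, not the authors' analysis code.

```python
# Illustration of two of the measures listed above: an integrated squared-jerk
# score and angular error at fixed proportions of a 2-D hand path. The
# sampling rate, path, and exact formulas are assumptions for illustration.
import numpy as np

DT = 1.0 / 200.0   # assumed 200 Hz sampling

def jerk_score(xy):
    """Integrated squared jerk of a 2-D hand path (one common variant)."""
    jerk = np.diff(xy, n=3, axis=0) / DT**3
    return np.sum(jerk**2) * DT

def angular_errors(xy, target_xy, proportions=(0.25, 0.5, 0.75, 1.0)):
    """Signed angle (deg) between hand and target directions, measured from
    the start position, at given proportions of the cumulative path length."""
    start = xy[0]
    dist = np.concatenate(([0.0], np.cumsum(np.linalg.norm(np.diff(xy, axis=0), axis=1))))
    target = np.asarray(target_xy, dtype=float) - start
    errors = []
    for p in proportions:
        i = min(np.searchsorted(dist, p * dist[-1]), len(xy) - 1)
        hand = xy[i] - start
        ang = np.degrees(np.arctan2(hand[1], hand[0]) - np.arctan2(target[1], target[0]))
        errors.append((ang + 180.0) % 360.0 - 180.0)   # wrap to [-180, 180)
    return errors

# Hypothetical slightly curved reach toward a target at (10, 0) cm.
t = np.linspace(0.0, 1.0, 200)
path = np.column_stack((10.0 * t, 1.5 * np.sin(np.pi * t)))
print("angular errors (deg):", [f"{e:.1f}" for e in angular_errors(path, (10.0, 0.0))])
print(f"jerk score: {jerk_score(path):.3g}")
```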

Acknowledgments: Supported by the Natural Sciences and Engineering Research Council of Canada (NSERC); awarded to the last author.

Hide abstract

 


P2.93 Interindividual Differences in Eye Movements Made During Face Viewing Are Consistent Across Task And Stimulus Manipulations

Wegner-Clemens, K., Rennig, J., Magnotti, J.F. & Beauchamp, M.S.
Baylor College of Medicine

Show abstract

There is substantial interindividual variability in the eye movements made by humans viewing faces: some participants spend more time fixating the mouth of the viewed face, while others spend more time fixating the eyes. To determine the consistency of these interindividual differences, 41 participants viewed faces under different stimulus/task manipulations while their eye movements were recorded with an infrared eye tracker (EyeLink 1000 Plus, SR Research Inc.). The two stimulus conditions consisted of a dynamic face condition (two-second videos of one of four talkers speaking an audiovisual syllable) and a static face condition (still frames from the same videos). The two task conditions consisted of a speech task (reporting the identity of the monosyllable) and a gender task (reporting the gender of the talker’s face). Fixation data were subjected to a two-dimensional principal components analysis (PCA). The first PC (PC1) accounted for 42% of the total variation and consisted of a positive peak around the eyes of the talker and a negative peak around the mouth. Each participant’s PC1 value was correlated across conditions. For the same stimulus/different task manipulation (dynamic stimuli + speech vs. gender task) there was a significant correlation between participant values (r = 0.55, p = 0.0001). For the different stimulus/same task manipulation (dynamic vs. static stimuli + gender task), there was also a significant correlation between participant values (r = 0.53, p = 0.0003). For the different stimulus/different task manipulation (dynamic stimulus with speech task vs. static stimulus with gender task), the correlation was weaker (r = 0.31, p = 0.05). These results demonstrate that participants’ internal preferences for face viewing interact with the viewed face and the behavioral demands of the task to determine eye movement behavior.
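
The analysis pipeline described above can be sketched as follows: a PCA over participants' 2-D fixation maps, then a correlation of per-participant PC1 scores across two conditions. Array names, map resolution, and the synthetic data are assumptions, not the authors' code or data.

```python
# Sketch of the analysis described above: PCA over participants' 2-D fixation
# density maps, then correlation of per-participant PC1 scores across two
# conditions. Shapes and synthetic data are assumptions, not the authors'
# pipeline.
import numpy as np
from scipy.stats import pearsonr
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
n_participants, height, width = 41, 48, 64            # hypothetical map size

# fixation_maps[c, p] = fixation density map for condition c, participant p.
fixation_maps = rng.random((2, n_participants, height, width))

# Fit one PCA on all maps (flattened), then read off each participant's PC1 score.
X = fixation_maps.reshape(2 * n_participants, -1)
pc1 = PCA(n_components=1).fit_transform(X).ravel().reshape(2, n_participants)

r, p = pearsonr(pc1[0], pc1[1])
print(f"PC1 correlation across conditions: r = {r:.2f}, p = {p:.3f}")
```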

Hide abstract

 


P2.94 The relative role of visual self-motion feedback and biological sex identification on the sense of self

Schettler, A.(1), Holstead, I.(2), Turri, J.(1) & Barnett-Cowan, M.(3)
University of Waterloo, Department of Kinesiology

Show abstract

What constitutes the sense of self? In the last twenty years, there has been a rise in experimental research on bodily self-consciousness and the sense of self. Previous experiments, such as the rubber hand illusion and whole-body illusions, have focused on interpreting embodiment, an aspect of the sense of self, under the representational approach, which appeals to neural representations to explain what constitutes the sense of self. While the representational theory focuses on neural representations, the sensorimotor theory focuses on sensorimotor functions and voluntary action. Although there is no consensus among researchers on their definitions, the body schema is generally regarded as an unconscious, bottom-up, dynamic representation, relying on proprioceptive information from the muscles, joints, and skin during self-motion. The body image, on the other hand, is a more conscious, top-down, cognitive representation, incorporating semantic knowledge of the body and one’s identity, including biological sex. Here we investigated the degree to which biological sex and self-motion contribute to the visual representation of the self. To determine the relative role of biological sex and visual self-motion on the sense of self, we constructed a novel virtual reality experiment that systematically varied an avatar’s sex and motion, after which participants recorded judgments about the relationship between themselves and the avatar. Over multiple trials, participants were presented with pairs of avatars that visually represented the participant (“self avatar”) or another person (“opposite avatar”). Additionally, the avatars’ motion either corresponded to the participant’s motion or was decoupled from it. The results show that participants identified with i) “self avatars” over “opposite avatars”, ii) avatars moving congruently with self-motion over those moving incongruently, and, importantly, iii) the “opposite avatar” over the “self avatar” when the opposite avatar’s motion was congruent with self-motion. Our results suggest that both biological sex and self-motion are relevant to the body schema and body image, and that congruent bottom-up visual feedback of self-motion is particularly important for the sense of self, capable of overriding top-down self-identification factors such as biological sex.

Hide abstract

 


P2.95 Multisensory stochastic facilitation: Effect of thresholds and reaction times

Harrar, V., Lugo, J.E., Doti, R. & Faubert, J.
School of Optometry, Université de Montréal

Show abstract

The concept of stochastic facilitation suggests that adding a precise amount of white noise can improve the perceptibility of a weak-amplitude stimulus. Previous research has shown that tactile and auditory noise can facilitate visual perception. Here we asked whether stochastic facilitation generalises to a reaction time paradigm, and whether reaction times are correlated with tactile thresholds. When multiple sensory systems are stimulated simultaneously, reaction times are faster than to either stimulus alone, and are often faster than predicted by probability summation of the unisensory reaction times (the race model). Five participants were each tested in five blocks, each of which contained a different background noise level, randomly ordered across sessions. At each noise level, they performed a tactile threshold detection task and a tactile reaction time task. Both tactile thresholds and tactile reaction times were significantly affected by the background white noise. While the preferred noise amplitude differed across participants, the lowest average threshold was obtained with white noise presented binaurally at 70 dB. The reaction times were analysed by fitting an ex-Gaussian distribution, the convolution of a Gaussian and an exponential distribution. The white noise significantly affected the exponential parameter (tau) in a way that is compatible with the facilitation of thresholds. We therefore conclude that multisensory reaction time facilitation can, at least in part, be explained by stochastic facilitation of the neural signals.
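
For readers unfamiliar with the ex-Gaussian, it can be fit in a few lines with SciPy; the sketch below shows one way to recover mu, sigma, and tau from a set of reaction times. The data are synthetic and the fitting choice is an assumption, not the authors' analysis code.

```python
# Sketch of an ex-Gaussian fit to reaction times, as described above. SciPy's
# exponnorm parameterises the ex-Gaussian with shape K, loc and scale, where
# mu = loc, sigma = scale and tau = K * scale. The reaction times below are
# synthetic, not the study's data.
import numpy as np
from scipy.stats import exponnorm, norm, expon

rng = np.random.default_rng(2)

# Synthetic RTs (s): a Gaussian component plus an exponential tail.
rts = (norm.rvs(loc=0.25, scale=0.03, size=500, random_state=rng)
       + expon.rvs(scale=0.08, size=500, random_state=rng))

K, loc, scale = exponnorm.fit(rts)
mu, sigma, tau = loc, scale, K * scale
print(f"mu = {mu:.3f} s, sigma = {sigma:.3f} s, tau = {tau:.3f} s")
```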

Hide abstract

 


Event Timeslots (1)

Saturday, June 16