T4.1 Rapid improvement of audiovisual simultaneity perception after short-term music training
Petrini, K., Di Mauro, M., Waters, G. & Jicol, C. Department of Psychology, University of Bath
Several studies have shown that the ability to detect audiovisual simultaneity is markedly better in musicians than in non-musicians (e.g. Lee and Noppeney, 2011). However, the amount of training required to improve audiovisual simultaneity precision is still unknown. Here we examined, in two experiments, whether short-term training with a musical instrument would improve audiovisual simultaneity precision. In the first experiment, 13 participants were trained on the drums for two hours in total (one one-hour session in each of two separate weeks). Another group of 13 participants passively observed the trainer playing the drums. Before and after the training, or observation, participants were tested on an audiovisual simultaneity judgement task with nine levels of asynchrony and two types of stimuli (a simple flash-and-beep clip and a more complex face-voice clip). The second experiment was identical to the first, except that the 14 participants in the music training group were trained on the saxophone and the 15 participants in the control group, who received no training, completed the task at the same times as the music training group. We fitted each participant's simultaneity judgement data with an Independent Channels Model (Garcia-Perez & Alcala-Quintana, 2012) and obtained parameter estimates corresponding to sensory processes (e.g. the rate of processing of the visual and auditory cues) and decisional processes (e.g. the decision boundary, or criterion, for asynchrony judgements). We found that active training on the drums significantly improved the precision of both sensory and decisional processes, whereas training on the saxophone improved only one sensory process. These results show a rapid effect of music training on audiovisual simultaneity perception (with the extent of this effect being instrument-dependent), and have important implications for rehabilitation therapies aimed at populations with poor audiovisual simultaneity precision (e.g. autistic individuals).
T4.2 The Multisensory Perception of Music
Russo, F. A. Ryerson University
Definitions of music tend to be unimodal in nature, often including some version of the idea that music is organized sound with aesthetic intent. Even philosophical treatises that attempt to define music in its broadest terms tend to overlook multisensory aspects. However, multisensory aspects abound. Most of the evidence for multisensory integration in music has been derived from audio-visual paradigms, but research has increasingly begun to consider the important role of somatosensation in music. In addition, sensorimotor networks have been implicated that give rise to interesting cascade effects. For example, spontaneous motor activity in response to rhythm gives rise to micro-fluctuations of posture and head position, which may in turn lead to vestibular stimulation. When such motor activity becomes entrained, it has the potential to serve as its own channel of sensory input. As such, the perception of music is routinely multisensory, integrating inputs from auditory, visual, somatosensory, vestibular and motor areas. This review will commence with a brief consideration of the auditory-only classical view of music perception, with a focus on lateralization, basic modularity, and pathways. The review will then turn to a systematic consideration of evidence regarding non-auditory and multisensory processing of three primary dimensions of music: pitch, timbre, and rhythm. For each dimension, behavioral and neuroscientific evidence will be considered and contextualized with respect to leading theories of multisensory perception.
T4.3 Tracking the evolution of learning a dance choreography in expert ballet dancers and people with Parkinson’s disease
DeSouza, J. F. X. York University
At IMRF 2013, we presented analyses from our project examining the neural networks involved in learning a new ballet, set to a novel piece of music, over 8 months, with a focus on auditory cortex (DeSouza & Bar, 2012). We scanned subjects (expert dancers and people with PD) up to four times using fMRI. To date, we have scanned 18 professional dancers from the National Ballet of Canada, 12 controls and 10 people with PD. All subjects visualized dancing to a one-minute piece of music during an 8-minute fMRI scan; they were asked to visualize dancing their part while listening to the music. For details of the training and performances of the first of the four cohorts, see Bar & DeSouza (2016). Results revealed a significant increase in BOLD signal across sessions in a network of brain regions including bilateral auditory cortex and the supplementary motor area (SMA) over the first three imaging sessions, but a reduction in the fourth session at 34 weeks. This reduction in activity was not observed in the basal ganglia (caudate nucleus). This increase and subsequent decrease in BOLD signal over learning is examined in more depth. Our results suggest that as we learn a complex motor sequence in time to music, neuronal activity increases until performance and then decreases by 34 weeks, possibly as a result of overlearning and habit formation. Our findings may also highlight the unique role of basal ganglia regions in the learning of motor sequences. We now aim to use these functional regions of activation as seed regions for structural (DTI) and functional connectivity analyses.
T4.4 Improving visual recognition memory with sound
Glicksohn, A., Murray, C.A., & Shams, L. UCLA
Background: Many objects and events that we encounter in our daily lives produce both visual and auditory information. Previous studies reveal that recognition memory in one modality (e.g., for an image of a clock) is enhanced if the object is initially encoded in both modalities (e.g., hearing and seeing a clock). It has been postulated that multisensory encoding results in 'richer' representations, which are later retrieved upon presentation of unisensory information. Question: Is this multisensory encoding advantage limited to natural audio-visual pairings (such as the image and sound of a clock), or does it extend to artificial associations (i.e., objects that are not naturally associated with sound)? Methods: We trained participants to associate a set of geometric patterns with brief melodies, and then tested them in a recognition memory task. In session 1, participants learned the association between shapes and melodies. In session 2, participants performed a memory task consisting of a study phase, delay, and test phase. During the study phase, half of the shapes were presented together with their associated sound and half were presented in silence, with these trials interleaved pseudorandomly. During the test phase, all shapes were presented in silence, and the task was to determine for each shape whether it was new or old. Half of the 'old' shapes had been initially presented audio-visually and half only visually. Results: Participants were better at recognizing shapes originally encoded audiovisually than shapes encoded only visually. Results of a control study confirmed that the association between the shapes and melodies was necessary for the observed enhancement. Conclusion: These findings reveal that the multisensory encoding advantage also applies to artificial audiovisual associations; therefore, associating visual stimuli with sounds can be exploited to enrich visual encoding and improve the subsequent retrieval of visual information.
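The abstract does not specify the recognition measure; a common way to score such an old/new task is sensitivity (d') computed per study condition against a shared false-alarm rate. A minimal sketch, with entirely hypothetical counts (the correction and all numbers are assumptions, not the authors' analysis):

```python
from statistics import NormalDist

Z = NormalDist().inv_cdf  # inverse standard normal CDF (z-score of a proportion)

def dprime(hits, n_old, false_alarms, n_new):
    """Recognition sensitivity d' = z(hit rate) - z(false-alarm rate).

    Uses the log-linear correction (add 0.5 to counts, 1 to totals) so
    rates of exactly 0 or 1 do not yield infinite z-scores.
    """
    h = (hits + 0.5) / (n_old + 1)
    f = (false_alarms + 0.5) / (n_new + 1)
    return Z(h) - Z(f)

# Hypothetical data: 40 old shapes per study condition, 80 new shapes,
# with a single pool of false alarms shared across conditions.
d_audiovisual = dprime(34, 40, 12, 80)  # shapes studied with their melody
d_visual_only = dprime(28, 40, 12, 80)  # shapes studied in silence
```

With these illustrative counts, d' is higher for audiovisually encoded shapes, mirroring the reported advantage.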
T4.5 Horizontal variation in visual stimuli affects auditory pitch perception equally in musicians and non-musicians
Wilbiks, J. M. P. & Klapman, S. F. Mount Allison University
When assessing the pitch of auditory tones, participants respond more quickly and more accurately when high pitch is associated with a physically high response key, and low pitch with a low response key (Rusconi et al., 2006). This SMARC effect seems to be relatively universal when it comes to vertical orientation, but there is also evidence for a horizontal SMARC effect in musicians (Keller & Koch, 2008; Cho & Proctor, 2002). Timmers and Li (2017) suggested that because pianists (and, to a lesser extent, other musicians) are trained into a high/right versus low/left spatial representation of pitch by the layout of the keyboard, they exhibit a horizontal SMARC effect as well.
To examine the relative effects of auditory and visual factors on the vertical (vSMARC) and horizontal (hSMARC) effects, we employed a 3 (vertical height) x 3 (horizontal placement) x 3 (pitch height) x 3 (horizontal pitch location) design. On each trial, 9 white dots in a 3×3 grid were presented for 1500 ms; then one dot turned black while a tone, which could be high, medium, or low in pitch, was presented to the left ear, the right ear, or both ears. Participants were asked to respond to the pitch of the tone by pressing one of three keyboard keys mapped horizontally.
Findings show that horizontal visual cues contribute to the perception of pitch height in the expected manner (F(4, 164) = 14.75, p < .001). Group comparisons suggest that, while task performance was significantly better in pianists and other musicians than in non-musicians (F(2, 41) = 3.7535, p = .03185), pianists unexpectedly showed a smaller hSMARC effect than musicians and non-musicians (F(8, 164) = 2.2593, p = .02570). Future research should manipulate factors such as pitch proximity and response mapping to provide optimal conditions for the hSMARC effect to be observed.
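The hSMARC effect in designs like this is typically quantified as the reaction-time cost of SMARC-incongruent pitch/position pairings. A minimal sketch of that per-participant score (the trial encoding and all RT values are hypothetical, not the study's data or analysis):

```python
# SMARC-consistent pairings: high pitch with rightward position,
# low pitch with leftward position; medium pitch and central
# positions are neutral and excluded from the score.
CONGRUENT = {("high", "right"), ("low", "left")}
INCONGRUENT = {("high", "left"), ("low", "right")}

def hsmarc_effect(trials):
    """Mean RT (ms) on incongruent minus congruent trials.

    Each trial is (pitch, position, rt_ms); a positive score indicates
    a SMARC-consistent advantage for congruent pairings.
    """
    con = [rt for pitch, pos, rt in trials if (pitch, pos) in CONGRUENT]
    inc = [rt for pitch, pos, rt in trials if (pitch, pos) in INCONGRUENT]
    return sum(inc) / len(inc) - sum(con) / len(con)

# Hypothetical trials for one participant
example = [
    ("high", "right", 480), ("low", "left", 470),   # congruent
    ("high", "left", 530), ("low", "right", 540),   # incongruent
    ("mid", "center", 500),                          # neutral, ignored
]
effect = hsmarc_effect(example)
```

A group comparison would then test whether this score is smaller in pianists than in the other groups, as reported above.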