Looming motion in depth may signal an approaching threat or imminent collision that demands an immediate fight-or-flight response, whereas receding motion may indicate a successful escape. Discriminating these two kinds of motion quickly and accurately is therefore important for animals' survival.
Both visual and auditory inputs, such as changes in size and loudness, provide reliable information about looming and receding motion. Many unisensory studies have found an asymmetry between the perception of looming and receding signals: looming events are perceptually more salient than receding ones, a phenomenon known as the 'looming bias'. This bias has been observed in many salience-related psychophysical tasks. However, no study has examined whether the same looming bias also exists in cross-modal interaction.
Based on improved experimental materials, the current research systematically explores the influence of visual depth information on the discrimination of auditory motion in depth, using three behavioral experiments and an EEG experiment.
The first study comprised three behavioral experiments that characterized the pattern of this cross-modal influence. In Experiment 1, we used different visual stimuli to examine how the discrimination of auditory motion is influenced by synchronous visual depth cues and whether a looming bias exists in this process. The congruency effect was larger in the visual looming condition than in the visual receding condition; that is, visual looming signals exerted a stronger influence than receding signals on the discrimination of auditory motion in depth. Experiment 2 assessed the looming bias with a psychophysical method, and its results were consistent with those of Experiment 1. In Experiment 3, we manipulated audio-visual synchrony to examine how it modulates this cross-modal influence. The influence was largest at an SOA of zero: visual information affected the discrimination of auditory motion direction most strongly when the two signals were presented synchronously. To some extent, this result also indicates that the findings of Experiments 1 and 2 arise from multisensory integration rather than from semantic priming or response bias. In the second study, we explored the neural mechanism, asking at which stage of perceptual decision-making this cross-modal influence occurs. We found that the congruency effect is related to semantic interactions and that the cross-modal influence may occur at the decision-formation stage.
In general, the current research found that the looming bias is widespread, which helps us understand the characteristics of motion perception in depth. By exploring how motion signals from different modalities interact at different processing stages, our research provides evidence from the depth dimension for hypotheses about multisensory integration.