
Seminar: Waka Fujisaki

Monday, 10 September 2007, 11:00 am to 12:30 pm.

Please note: the seminar will take place at the UFR Biomédicale des Saints-Pères, 45 rue des Saints-Pères, 75006 Paris, in the LPP meeting room, H432, 4th floor. [map]

Waka Fujisaki, researcher at the Sensory and Motor Research Group, Human and Information Science Laboratory, NTT Communication Science Laboratories, Atsugi, Japan

Mid-level, feature-based processing for cross-modal temporal synchrony perception

Temporal synchrony, or simultaneity, is a critical condition for integrating information presented in different sensory modalities. In the first part of my talk, I will briefly describe studies we have conducted over the past several years, using a variety of experimental techniques, to gain insight into the mechanism underlying audio-visual synchrony/simultaneity perception. Our overall findings, which include the recalibration of audio-visual simultaneity (Fujisaki et al., 2004), the low temporal resolution of audio-visual synchrony-asynchrony discrimination (Fujisaki & Nishida, 2005; Fujisaki & Nishida, 2007), and serial visual search for an audio-visually synchronous target (Fujisaki et al., 2006), are consistent with the hypothesis that our perception of audio-visual synchrony or simultaneity is mediated by a mid-level, attention-demanding process that compares salient temporal features individuated from within-modal signal streams.

The latter part of my talk will focus on the cross-modal temporal binding problem and its relation to attentional resolution. We measured temporal synchrony-asynchrony discrimination performance for audio-tactile, visuo-tactile, and audio-visual pairs to explore whether cross-modal temporal binding is established by a common, attention-demanding mechanism regardless of the combination of modalities. The results showed that the temporal frequency limit of synchrony-asynchrony discrimination was similar for audio-visual and visuo-tactile pairs (~4 Hz for repetitive pulse trains), but that the limit for the audio-tactile pair was significantly higher (~10 Hz or above). This seems to argue against a single common mechanism. However, regardless of the modality pair, performance was higher for single pulses than for repetitive pulse trains (a temporal crowding effect), performance was little affected by a change in the matching feature (feature invariance), and the matching feature was under the influence of attentive feature selection. These findings suggest that the principle underlying temporal synchrony judgment may be common across modality pairs, with performance limited by the attentional temporal resolution of each modality.
